Test Report: KVM_Linux_crio 17847

14a103756c0ada24883d7fe9ede608ef5810ed73:2023-12-25:32430

Failed tests (29/308)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 153.16
49 TestAddons/StoppedEnableDisable 155.35
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.42
153 TestFunctional/parallel/MountCmd/specific-port 11.73
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 180.28
213 TestMultiNode/serial/PingHostFrom2Pods 3.33
220 TestMultiNode/serial/RestartKeepsNodes 687.74
222 TestMultiNode/serial/StopMultiNode 143
229 TestPreload 280.73
235 TestRunningBinaryUpgrade 172.11
271 TestStoppedBinaryUpgrade/Upgrade 306.45
284 TestStartStop/group/old-k8s-version/serial/Stop 140.69
287 TestStartStop/group/no-preload/serial/Stop 140.89
293 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
299 TestStartStop/group/embed-certs/serial/Stop 139.8
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.63
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.52
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.59
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.48
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.1
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 531.07
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 452.12
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 542.48
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 266.27
319 TestStartStop/group/newest-cni/serial/Stop 139.76
321 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.41
TestAddons/parallel/Ingress (153.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-294911 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-294911 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-294911 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6ba140c4-9acd-4f8f-a1b8-20213766cbf9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6ba140c4-9acd-4f8f-a1b8-20213766cbf9] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.0091208s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-294911 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.985597101s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-294911 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.148
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-294911 addons disable ingress-dns --alsologtostderr -v=1: (1.317967151s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-294911 addons disable ingress --alsologtostderr -v=1: (8.145245701s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-294911 -n addons-294911
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-294911 logs -n 25: (1.481330003s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |                     |
	|         | -p download-only-611991                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC | 25 Dec 23 12:16 UTC |
	| delete  | -p download-only-611991                                                                     | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC | 25 Dec 23 12:16 UTC |
	| delete  | -p download-only-611991                                                                     | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC | 25 Dec 23 12:16 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-944204 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |                     |
	|         | binary-mirror-944204                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35281                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-944204                                                                     | binary-mirror-944204 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC | 25 Dec 23 12:16 UTC |
	| addons  | disable dashboard -p                                                                        | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |                     |
	|         | addons-294911                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |                     |
	|         | addons-294911                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-294911 --wait=true                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC | 25 Dec 23 12:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-294911 addons                                                                        | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-294911 ssh cat                                                                       | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | /opt/local-path-provisioner/pvc-d0b87c27-b3de-491f-9f3e-a2803f1d0726_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-294911 addons disable                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-294911 ip                                                                            | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	| addons  | addons-294911 addons disable                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | -p addons-294911                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | addons-294911                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-294911 ssh curl -s                                                                   | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | addons-294911                                                                               |                      |         |         |                     |                     |
	| addons  | addons-294911 addons disable                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:19 UTC | 25 Dec 23 12:19 UTC |
	|         | -p addons-294911                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-294911 addons                                                                        | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:20 UTC | 25 Dec 23 12:20 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-294911 addons                                                                        | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:20 UTC | 25 Dec 23 12:20 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-294911 ip                                                                            | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:21 UTC | 25 Dec 23 12:21 UTC |
	| addons  | addons-294911 addons disable                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:21 UTC | 25 Dec 23 12:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-294911 addons disable                                                                | addons-294911        | jenkins | v1.32.0 | 25 Dec 23 12:21 UTC | 25 Dec 23 12:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:16:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:16:32.298165 1450194 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:16:32.298319 1450194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:32.298328 1450194 out.go:309] Setting ErrFile to fd 2...
	I1225 12:16:32.298333 1450194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:32.298523 1450194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:16:32.299137 1450194 out.go:303] Setting JSON to false
	I1225 12:16:32.299983 1450194 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":154745,"bootTime":1703351847,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:16:32.300053 1450194 start.go:138] virtualization: kvm guest
	I1225 12:16:32.302488 1450194 out.go:177] * [addons-294911] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:16:32.303976 1450194 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:16:32.305373 1450194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:16:32.304003 1450194 notify.go:220] Checking for updates...
	I1225 12:16:32.308237 1450194 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:16:32.310046 1450194 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:32.311706 1450194 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:16:32.313113 1450194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:16:32.314709 1450194 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:16:32.348239 1450194 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 12:16:32.349777 1450194 start.go:298] selected driver: kvm2
	I1225 12:16:32.349811 1450194 start.go:902] validating driver "kvm2" against <nil>
	I1225 12:16:32.349823 1450194 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:16:32.350564 1450194 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:16:32.350653 1450194 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 12:16:32.366576 1450194 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 12:16:32.366635 1450194 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1225 12:16:32.366861 1450194 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 12:16:32.366927 1450194 cni.go:84] Creating CNI manager for ""
	I1225 12:16:32.366940 1450194 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:16:32.366952 1450194 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1225 12:16:32.366960 1450194 start_flags.go:323] config:
	{Name:addons-294911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-294911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:16:32.367080 1450194 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:16:32.370012 1450194 out.go:177] * Starting control plane node addons-294911 in cluster addons-294911
	I1225 12:16:32.371494 1450194 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:16:32.371552 1450194 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 12:16:32.371560 1450194 cache.go:56] Caching tarball of preloaded images
	I1225 12:16:32.371639 1450194 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:16:32.371649 1450194 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:16:32.371988 1450194 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/config.json ...
	I1225 12:16:32.372012 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/config.json: {Name:mk994948d7c967adbb85b4e78fec0c10f14c4937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:16:32.372160 1450194 start.go:365] acquiring machines lock for addons-294911: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:16:32.372217 1450194 start.go:369] acquired machines lock for "addons-294911" in 42.981µs
	I1225 12:16:32.372235 1450194 start.go:93] Provisioning new machine with config: &{Name:addons-294911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-294911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:16:32.372327 1450194 start.go:125] createHost starting for "" (driver="kvm2")
	I1225 12:16:32.374114 1450194 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1225 12:16:32.374282 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:16:32.374339 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:16:32.389210 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42497
	I1225 12:16:32.389913 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:16:32.390598 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:16:32.390626 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:16:32.391119 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:16:32.391338 1450194 main.go:141] libmachine: (addons-294911) Calling .GetMachineName
	I1225 12:16:32.391465 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:32.391607 1450194 start.go:159] libmachine.API.Create for "addons-294911" (driver="kvm2")
	I1225 12:16:32.391640 1450194 client.go:168] LocalClient.Create starting
	I1225 12:16:32.391815 1450194 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem
	I1225 12:16:32.649158 1450194 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem
	I1225 12:16:32.722638 1450194 main.go:141] libmachine: Running pre-create checks...
	I1225 12:16:32.722669 1450194 main.go:141] libmachine: (addons-294911) Calling .PreCreateCheck
	I1225 12:16:32.723284 1450194 main.go:141] libmachine: (addons-294911) Calling .GetConfigRaw
	I1225 12:16:32.723831 1450194 main.go:141] libmachine: Creating machine...
	I1225 12:16:32.723861 1450194 main.go:141] libmachine: (addons-294911) Calling .Create
	I1225 12:16:32.724081 1450194 main.go:141] libmachine: (addons-294911) Creating KVM machine...
	I1225 12:16:32.725538 1450194 main.go:141] libmachine: (addons-294911) DBG | found existing default KVM network
	I1225 12:16:32.726348 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:32.726184 1450215 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a30}
	I1225 12:16:32.731939 1450194 main.go:141] libmachine: (addons-294911) DBG | trying to create private KVM network mk-addons-294911 192.168.39.0/24...
	I1225 12:16:32.812127 1450194 main.go:141] libmachine: (addons-294911) DBG | private KVM network mk-addons-294911 192.168.39.0/24 created
	I1225 12:16:32.812165 1450194 main.go:141] libmachine: (addons-294911) Setting up store path in /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911 ...
	I1225 12:16:32.812181 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:32.812079 1450215 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:32.812199 1450194 main.go:141] libmachine: (addons-294911) Building disk image from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 12:16:32.812264 1450194 main.go:141] libmachine: (addons-294911) Downloading /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1225 12:16:33.070688 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:33.070541 1450215 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa...
	I1225 12:16:33.210752 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:33.210595 1450215 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/addons-294911.rawdisk...
	I1225 12:16:33.210817 1450194 main.go:141] libmachine: (addons-294911) DBG | Writing magic tar header
	I1225 12:16:33.210832 1450194 main.go:141] libmachine: (addons-294911) DBG | Writing SSH key tar header
	I1225 12:16:33.210843 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:33.210722 1450215 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911 ...
	I1225 12:16:33.210862 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911
	I1225 12:16:33.210959 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines
	I1225 12:16:33.210991 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911 (perms=drwx------)
	I1225 12:16:33.210999 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:33.211009 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600
	I1225 12:16:33.211016 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1225 12:16:33.211025 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home/jenkins
	I1225 12:16:33.211034 1450194 main.go:141] libmachine: (addons-294911) DBG | Checking permissions on dir: /home
	I1225 12:16:33.211042 1450194 main.go:141] libmachine: (addons-294911) DBG | Skipping /home - not owner
	I1225 12:16:33.211058 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines (perms=drwxr-xr-x)
	I1225 12:16:33.211068 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube (perms=drwxr-xr-x)
	I1225 12:16:33.211093 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600 (perms=drwxrwxr-x)
	I1225 12:16:33.211105 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1225 12:16:33.211118 1450194 main.go:141] libmachine: (addons-294911) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1225 12:16:33.211143 1450194 main.go:141] libmachine: (addons-294911) Creating domain...
	I1225 12:16:33.212352 1450194 main.go:141] libmachine: (addons-294911) define libvirt domain using xml: 
	I1225 12:16:33.212388 1450194 main.go:141] libmachine: (addons-294911) <domain type='kvm'>
	I1225 12:16:33.212401 1450194 main.go:141] libmachine: (addons-294911)   <name>addons-294911</name>
	I1225 12:16:33.212417 1450194 main.go:141] libmachine: (addons-294911)   <memory unit='MiB'>4000</memory>
	I1225 12:16:33.212453 1450194 main.go:141] libmachine: (addons-294911)   <vcpu>2</vcpu>
	I1225 12:16:33.212494 1450194 main.go:141] libmachine: (addons-294911)   <features>
	I1225 12:16:33.212510 1450194 main.go:141] libmachine: (addons-294911)     <acpi/>
	I1225 12:16:33.212523 1450194 main.go:141] libmachine: (addons-294911)     <apic/>
	I1225 12:16:33.212535 1450194 main.go:141] libmachine: (addons-294911)     <pae/>
	I1225 12:16:33.212543 1450194 main.go:141] libmachine: (addons-294911)     
	I1225 12:16:33.212550 1450194 main.go:141] libmachine: (addons-294911)   </features>
	I1225 12:16:33.212564 1450194 main.go:141] libmachine: (addons-294911)   <cpu mode='host-passthrough'>
	I1225 12:16:33.212580 1450194 main.go:141] libmachine: (addons-294911)   
	I1225 12:16:33.212606 1450194 main.go:141] libmachine: (addons-294911)   </cpu>
	I1225 12:16:33.212622 1450194 main.go:141] libmachine: (addons-294911)   <os>
	I1225 12:16:33.212644 1450194 main.go:141] libmachine: (addons-294911)     <type>hvm</type>
	I1225 12:16:33.212660 1450194 main.go:141] libmachine: (addons-294911)     <boot dev='cdrom'/>
	I1225 12:16:33.212684 1450194 main.go:141] libmachine: (addons-294911)     <boot dev='hd'/>
	I1225 12:16:33.212700 1450194 main.go:141] libmachine: (addons-294911)     <bootmenu enable='no'/>
	I1225 12:16:33.212717 1450194 main.go:141] libmachine: (addons-294911)   </os>
	I1225 12:16:33.212730 1450194 main.go:141] libmachine: (addons-294911)   <devices>
	I1225 12:16:33.212744 1450194 main.go:141] libmachine: (addons-294911)     <disk type='file' device='cdrom'>
	I1225 12:16:33.212789 1450194 main.go:141] libmachine: (addons-294911)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/boot2docker.iso'/>
	I1225 12:16:33.212816 1450194 main.go:141] libmachine: (addons-294911)       <target dev='hdc' bus='scsi'/>
	I1225 12:16:33.212827 1450194 main.go:141] libmachine: (addons-294911)       <readonly/>
	I1225 12:16:33.212839 1450194 main.go:141] libmachine: (addons-294911)     </disk>
	I1225 12:16:33.212859 1450194 main.go:141] libmachine: (addons-294911)     <disk type='file' device='disk'>
	I1225 12:16:33.212882 1450194 main.go:141] libmachine: (addons-294911)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1225 12:16:33.212912 1450194 main.go:141] libmachine: (addons-294911)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/addons-294911.rawdisk'/>
	I1225 12:16:33.212925 1450194 main.go:141] libmachine: (addons-294911)       <target dev='hda' bus='virtio'/>
	I1225 12:16:33.212936 1450194 main.go:141] libmachine: (addons-294911)     </disk>
	I1225 12:16:33.212949 1450194 main.go:141] libmachine: (addons-294911)     <interface type='network'>
	I1225 12:16:33.212967 1450194 main.go:141] libmachine: (addons-294911)       <source network='mk-addons-294911'/>
	I1225 12:16:33.212985 1450194 main.go:141] libmachine: (addons-294911)       <model type='virtio'/>
	I1225 12:16:33.213000 1450194 main.go:141] libmachine: (addons-294911)     </interface>
	I1225 12:16:33.213018 1450194 main.go:141] libmachine: (addons-294911)     <interface type='network'>
	I1225 12:16:33.213034 1450194 main.go:141] libmachine: (addons-294911)       <source network='default'/>
	I1225 12:16:33.213047 1450194 main.go:141] libmachine: (addons-294911)       <model type='virtio'/>
	I1225 12:16:33.213061 1450194 main.go:141] libmachine: (addons-294911)     </interface>
	I1225 12:16:33.213079 1450194 main.go:141] libmachine: (addons-294911)     <serial type='pty'>
	I1225 12:16:33.213097 1450194 main.go:141] libmachine: (addons-294911)       <target port='0'/>
	I1225 12:16:33.213111 1450194 main.go:141] libmachine: (addons-294911)     </serial>
	I1225 12:16:33.213124 1450194 main.go:141] libmachine: (addons-294911)     <console type='pty'>
	I1225 12:16:33.213139 1450194 main.go:141] libmachine: (addons-294911)       <target type='serial' port='0'/>
	I1225 12:16:33.213151 1450194 main.go:141] libmachine: (addons-294911)     </console>
	I1225 12:16:33.213163 1450194 main.go:141] libmachine: (addons-294911)     <rng model='virtio'>
	I1225 12:16:33.213174 1450194 main.go:141] libmachine: (addons-294911)       <backend model='random'>/dev/random</backend>
	I1225 12:16:33.213186 1450194 main.go:141] libmachine: (addons-294911)     </rng>
	I1225 12:16:33.213193 1450194 main.go:141] libmachine: (addons-294911)     
	I1225 12:16:33.213198 1450194 main.go:141] libmachine: (addons-294911)     
	I1225 12:16:33.213206 1450194 main.go:141] libmachine: (addons-294911)   </devices>
	I1225 12:16:33.213215 1450194 main.go:141] libmachine: (addons-294911) </domain>
	I1225 12:16:33.213220 1450194 main.go:141] libmachine: (addons-294911) 
	I1225 12:16:33.218269 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:d9:3a:42 in network default
	I1225 12:16:33.219282 1450194 main.go:141] libmachine: (addons-294911) Ensuring networks are active...
	I1225 12:16:33.219319 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:33.220177 1450194 main.go:141] libmachine: (addons-294911) Ensuring network default is active
	I1225 12:16:33.220572 1450194 main.go:141] libmachine: (addons-294911) Ensuring network mk-addons-294911 is active
	I1225 12:16:33.221173 1450194 main.go:141] libmachine: (addons-294911) Getting domain xml...
	I1225 12:16:33.221849 1450194 main.go:141] libmachine: (addons-294911) Creating domain...
	I1225 12:16:34.468628 1450194 main.go:141] libmachine: (addons-294911) Waiting to get IP...
	I1225 12:16:34.469617 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:34.470062 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:34.470082 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:34.470025 1450215 retry.go:31] will retry after 213.366819ms: waiting for machine to come up
	I1225 12:16:34.685539 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:34.686030 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:34.686085 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:34.685980 1450215 retry.go:31] will retry after 245.296873ms: waiting for machine to come up
	I1225 12:16:34.932599 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:34.932977 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:34.933010 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:34.932925 1450215 retry.go:31] will retry after 369.116425ms: waiting for machine to come up
	I1225 12:16:35.304897 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:35.305480 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:35.305514 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:35.305439 1450215 retry.go:31] will retry after 373.491824ms: waiting for machine to come up
	I1225 12:16:35.681262 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:35.681778 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:35.681810 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:35.681728 1450215 retry.go:31] will retry after 693.821898ms: waiting for machine to come up
	I1225 12:16:36.376803 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:36.377349 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:36.377382 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:36.377291 1450215 retry.go:31] will retry after 613.47827ms: waiting for machine to come up
	I1225 12:16:36.992239 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:36.992714 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:36.992745 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:36.992658 1450215 retry.go:31] will retry after 882.32752ms: waiting for machine to come up
	I1225 12:16:37.876418 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:37.876990 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:37.877027 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:37.876925 1450215 retry.go:31] will retry after 1.384968122s: waiting for machine to come up
	I1225 12:16:39.263907 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:39.264329 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:39.264362 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:39.264265 1450215 retry.go:31] will retry after 1.667747229s: waiting for machine to come up
	I1225 12:16:40.933306 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:40.933766 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:40.933802 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:40.933700 1450215 retry.go:31] will retry after 2.165413703s: waiting for machine to come up
	I1225 12:16:43.101442 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:43.101952 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:43.101975 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:43.101910 1450215 retry.go:31] will retry after 2.545968519s: waiting for machine to come up
	I1225 12:16:45.650824 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:45.651285 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:45.651318 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:45.651243 1450215 retry.go:31] will retry after 2.827912446s: waiting for machine to come up
	I1225 12:16:48.480930 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:48.481337 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:48.481364 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:48.481289 1450215 retry.go:31] will retry after 3.723737625s: waiting for machine to come up
	I1225 12:16:52.209065 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:52.209547 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find current IP address of domain addons-294911 in network mk-addons-294911
	I1225 12:16:52.209577 1450194 main.go:141] libmachine: (addons-294911) DBG | I1225 12:16:52.209505 1450215 retry.go:31] will retry after 5.387371635s: waiting for machine to come up
	I1225 12:16:57.598743 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.599078 1450194 main.go:141] libmachine: (addons-294911) Found IP for machine: 192.168.39.148
	I1225 12:16:57.599125 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has current primary IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.599140 1450194 main.go:141] libmachine: (addons-294911) Reserving static IP address...
	I1225 12:16:57.599580 1450194 main.go:141] libmachine: (addons-294911) DBG | unable to find host DHCP lease matching {name: "addons-294911", mac: "52:54:00:a6:01:f9", ip: "192.168.39.148"} in network mk-addons-294911
	I1225 12:16:57.835760 1450194 main.go:141] libmachine: (addons-294911) DBG | Getting to WaitForSSH function...
	I1225 12:16:57.835849 1450194 main.go:141] libmachine: (addons-294911) Reserved static IP address: 192.168.39.148
	I1225 12:16:57.835864 1450194 main.go:141] libmachine: (addons-294911) Waiting for SSH to be available...
	I1225 12:16:57.839020 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.839550 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:57.839586 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.839749 1450194 main.go:141] libmachine: (addons-294911) DBG | Using SSH client type: external
	I1225 12:16:57.839782 1450194 main.go:141] libmachine: (addons-294911) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa (-rw-------)
	I1225 12:16:57.839818 1450194 main.go:141] libmachine: (addons-294911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:16:57.839836 1450194 main.go:141] libmachine: (addons-294911) DBG | About to run SSH command:
	I1225 12:16:57.839849 1450194 main.go:141] libmachine: (addons-294911) DBG | exit 0
	I1225 12:16:57.934490 1450194 main.go:141] libmachine: (addons-294911) DBG | SSH cmd err, output: <nil>: 
	I1225 12:16:57.934854 1450194 main.go:141] libmachine: (addons-294911) KVM machine creation complete!
	I1225 12:16:57.935202 1450194 main.go:141] libmachine: (addons-294911) Calling .GetConfigRaw
	I1225 12:16:57.941545 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:57.941870 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:57.942113 1450194 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1225 12:16:57.942134 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:16:57.943749 1450194 main.go:141] libmachine: Detecting operating system of created instance...
	I1225 12:16:57.943777 1450194 main.go:141] libmachine: Waiting for SSH to be available...
	I1225 12:16:57.943787 1450194 main.go:141] libmachine: Getting to WaitForSSH function...
	I1225 12:16:57.943797 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:57.946695 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.947014 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:57.947039 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:57.947192 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:57.947398 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:57.947595 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:57.947741 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:57.947923 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:57.948275 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:57.948290 1450194 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1225 12:16:58.073851 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:16:58.073877 1450194 main.go:141] libmachine: Detecting the provisioner...
	I1225 12:16:58.073890 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.076976 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.077390 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.077440 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.077531 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:58.077793 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.077982 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.078109 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:58.078306 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:58.078812 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:58.078830 1450194 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1225 12:16:58.203400 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1225 12:16:58.203494 1450194 main.go:141] libmachine: found compatible host: buildroot
	I1225 12:16:58.203506 1450194 main.go:141] libmachine: Provisioning with buildroot...
	I1225 12:16:58.203514 1450194 main.go:141] libmachine: (addons-294911) Calling .GetMachineName
	I1225 12:16:58.203815 1450194 buildroot.go:166] provisioning hostname "addons-294911"
	I1225 12:16:58.203860 1450194 main.go:141] libmachine: (addons-294911) Calling .GetMachineName
	I1225 12:16:58.204078 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.206852 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.207272 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.207324 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.207509 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:58.207736 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.207963 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.208142 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:58.208320 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:58.208687 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:58.208709 1450194 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-294911 && echo "addons-294911" | sudo tee /etc/hostname
	I1225 12:16:58.343124 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-294911
	
	I1225 12:16:58.343157 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.346026 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.346509 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.346545 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.346715 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:58.346931 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.347149 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.347288 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:58.347460 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:58.347957 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:58.347987 1450194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-294911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-294911/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-294911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:16:58.482853 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:16:58.482886 1450194 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:16:58.482924 1450194 buildroot.go:174] setting up certificates
	I1225 12:16:58.482938 1450194 provision.go:83] configureAuth start
	I1225 12:16:58.482955 1450194 main.go:141] libmachine: (addons-294911) Calling .GetMachineName
	I1225 12:16:58.483341 1450194 main.go:141] libmachine: (addons-294911) Calling .GetIP
	I1225 12:16:58.486278 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.486630 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.486666 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.486850 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.489295 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.489726 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.489757 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.489962 1450194 provision.go:138] copyHostCerts
	I1225 12:16:58.490064 1450194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:16:58.490263 1450194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:16:58.490366 1450194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:16:58.490471 1450194 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.addons-294911 san=[192.168.39.148 192.168.39.148 localhost 127.0.0.1 minikube addons-294911]
	I1225 12:16:58.653051 1450194 provision.go:172] copyRemoteCerts
	I1225 12:16:58.653137 1450194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:16:58.653171 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.656259 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.656617 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.656639 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.656863 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:58.657060 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.657190 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:58.657343 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:16:58.748496 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:16:58.771697 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1225 12:16:58.794641 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 12:16:58.817490 1450194 provision.go:86] duration metric: configureAuth took 334.53517ms
	I1225 12:16:58.817520 1450194 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:16:58.817733 1450194 config.go:182] Loaded profile config "addons-294911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:16:58.817833 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:58.820641 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.820936 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:58.820964 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:58.821132 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:58.821359 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.821534 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:58.821701 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:58.821949 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:58.822286 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:58.822302 1450194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:16:59.410927 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:16:59.410963 1450194 main.go:141] libmachine: Checking connection to Docker...
	I1225 12:16:59.411003 1450194 main.go:141] libmachine: (addons-294911) Calling .GetURL
	I1225 12:16:59.412523 1450194 main.go:141] libmachine: (addons-294911) DBG | Using libvirt version 6000000
	I1225 12:16:59.414846 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.415195 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.415225 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.415407 1450194 main.go:141] libmachine: Docker is up and running!
	I1225 12:16:59.415423 1450194 main.go:141] libmachine: Reticulating splines...
	I1225 12:16:59.415430 1450194 client.go:171] LocalClient.Create took 27.02378016s
	I1225 12:16:59.415451 1450194 start.go:167] duration metric: libmachine.API.Create for "addons-294911" took 27.023847671s
	I1225 12:16:59.415470 1450194 start.go:300] post-start starting for "addons-294911" (driver="kvm2")
	I1225 12:16:59.415506 1450194 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:16:59.415523 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:59.415786 1450194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:16:59.415812 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:59.417976 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.418272 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.418302 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.418427 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:59.418631 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:59.418801 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:59.418942 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:16:59.512446 1450194 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:16:59.516848 1450194 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:16:59.516883 1450194 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:16:59.516960 1450194 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:16:59.516987 1450194 start.go:303] post-start completed in 101.511662ms
	I1225 12:16:59.517029 1450194 main.go:141] libmachine: (addons-294911) Calling .GetConfigRaw
	I1225 12:16:59.517641 1450194 main.go:141] libmachine: (addons-294911) Calling .GetIP
	I1225 12:16:59.520111 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.520488 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.520523 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.520739 1450194 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/config.json ...
	I1225 12:16:59.520926 1450194 start.go:128] duration metric: createHost completed in 27.148585111s
	I1225 12:16:59.520976 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:59.523137 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.523448 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.523476 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.523583 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:59.523775 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:59.523979 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:59.524204 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:59.524440 1450194 main.go:141] libmachine: Using SSH client type: native
	I1225 12:16:59.524913 1450194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1225 12:16:59.524933 1450194 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:16:59.651186 1450194 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703506619.634586512
	
	I1225 12:16:59.651212 1450194 fix.go:206] guest clock: 1703506619.634586512
	I1225 12:16:59.651219 1450194 fix.go:219] Guest: 2023-12-25 12:16:59.634586512 +0000 UTC Remote: 2023-12-25 12:16:59.520962176 +0000 UTC m=+27.272829730 (delta=113.624336ms)
	I1225 12:16:59.651240 1450194 fix.go:190] guest clock delta is within tolerance: 113.624336ms
	I1225 12:16:59.651245 1450194 start.go:83] releasing machines lock for "addons-294911", held for 27.279018681s
	I1225 12:16:59.651265 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:59.651553 1450194 main.go:141] libmachine: (addons-294911) Calling .GetIP
	I1225 12:16:59.654245 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.654676 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.654700 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.654863 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:59.655400 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:59.655585 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:16:59.655714 1450194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:16:59.655763 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:59.655796 1450194 ssh_runner.go:195] Run: cat /version.json
	I1225 12:16:59.655822 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:16:59.658515 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.658755 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.658815 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.658845 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.659000 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:59.659174 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:59.659254 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:16:59.659277 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:16:59.659327 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:59.659447 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:16:59.659528 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:16:59.659609 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:16:59.659724 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:16:59.659853 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:16:59.775455 1450194 ssh_runner.go:195] Run: systemctl --version
	I1225 12:16:59.781226 1450194 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:16:59.936619 1450194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 12:16:59.943155 1450194 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:16:59.943227 1450194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:16:59.958703 1450194 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 12:16:59.958731 1450194 start.go:475] detecting cgroup driver to use...
	I1225 12:16:59.958812 1450194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:16:59.974846 1450194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:16:59.986771 1450194 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:16:59.986849 1450194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:16:59.998945 1450194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:17:00.012719 1450194 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:17:00.132684 1450194 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:17:00.250938 1450194 docker.go:219] disabling docker service ...
	I1225 12:17:00.251025 1450194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:17:00.264640 1450194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:17:00.277164 1450194 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:17:00.385682 1450194 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:17:00.493353 1450194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:17:00.507778 1450194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:17:00.525359 1450194 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:17:00.525423 1450194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:17:00.535899 1450194 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:17:00.535990 1450194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:17:00.546571 1450194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:17:00.556844 1450194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:17:00.567318 1450194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:17:00.578273 1450194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:17:00.587625 1450194 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:17:00.587697 1450194 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 12:17:00.601172 1450194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:17:00.611118 1450194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:17:00.719170 1450194 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 12:17:00.893287 1450194 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:17:00.893407 1450194 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:17:00.898630 1450194 start.go:543] Will wait 60s for crictl version
	I1225 12:17:00.898738 1450194 ssh_runner.go:195] Run: which crictl
	I1225 12:17:00.902703 1450194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:17:00.939316 1450194 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:17:00.939424 1450194 ssh_runner.go:195] Run: crio --version
	I1225 12:17:00.991137 1450194 ssh_runner.go:195] Run: crio --version
	I1225 12:17:01.040801 1450194 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:17:01.042385 1450194 main.go:141] libmachine: (addons-294911) Calling .GetIP
	I1225 12:17:01.045146 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:01.045510 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:01.045533 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:01.045775 1450194 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:17:01.050368 1450194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:17:01.062761 1450194 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:17:01.062821 1450194 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:17:01.098270 1450194 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 12:17:01.098348 1450194 ssh_runner.go:195] Run: which lz4
	I1225 12:17:01.102323 1450194 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 12:17:01.106789 1450194 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:17:01.106835 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 12:17:02.889635 1450194 crio.go:444] Took 1.787342 seconds to copy over tarball
	I1225 12:17:02.889706 1450194 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 12:17:05.955051 1450194 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06531298s)
	I1225 12:17:05.955110 1450194 crio.go:451] Took 3.065448 seconds to extract the tarball
	I1225 12:17:05.955121 1450194 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 12:17:05.998797 1450194 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:17:06.075353 1450194 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 12:17:06.075379 1450194 cache_images.go:84] Images are preloaded, skipping loading
	I1225 12:17:06.075477 1450194 ssh_runner.go:195] Run: crio config
	I1225 12:17:06.138333 1450194 cni.go:84] Creating CNI manager for ""
	I1225 12:17:06.138357 1450194 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:17:06.138385 1450194 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:17:06.138477 1450194 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-294911 NodeName:addons-294911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:17:06.138651 1450194 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-294911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 12:17:06.138828 1450194 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-294911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-294911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 12:17:06.138928 1450194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:17:06.152903 1450194 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:17:06.153007 1450194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 12:17:06.163269 1450194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1225 12:17:06.180874 1450194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:17:06.198294 1450194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1225 12:17:06.216968 1450194 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I1225 12:17:06.221433 1450194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:17:06.234569 1450194 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911 for IP: 192.168.39.148
	I1225 12:17:06.234609 1450194 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.234766 1450194 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:17:06.367413 1450194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt ...
	I1225 12:17:06.367452 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt: {Name:mk90c8de93b00137dea177e1e422815884f2d9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.367621 1450194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key ...
	I1225 12:17:06.367632 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key: {Name:mkd23001b2ba6d27934de752f29f136e6c10dd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.367703 1450194 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:17:06.602660 1450194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt ...
	I1225 12:17:06.602699 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt: {Name:mk8b541ab3f758db85b02167c8bc4185c62dd54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.602871 1450194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key ...
	I1225 12:17:06.602882 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key: {Name:mk218dffd31b0d21eb33d72ca69ecb658488976e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.602993 1450194 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.key
	I1225 12:17:06.603015 1450194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt with IP's: []
	I1225 12:17:06.697079 1450194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt ...
	I1225 12:17:06.697120 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: {Name:mk0e3daa0e7bf6327b0b012fae232ebca168e1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.697304 1450194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.key ...
	I1225 12:17:06.697316 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.key: {Name:mkd760ba9ed2fa7385ea9e85b568cb4d7dd6c80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.697386 1450194 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key.b8daa033
	I1225 12:17:06.697405 1450194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt.b8daa033 with IP's: [192.168.39.148 10.96.0.1 127.0.0.1 10.0.0.1]
	I1225 12:17:06.813707 1450194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt.b8daa033 ...
	I1225 12:17:06.813747 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt.b8daa033: {Name:mkf59e51e312c9b944f3c9822fd6e3b024fad0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.813914 1450194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key.b8daa033 ...
	I1225 12:17:06.813928 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key.b8daa033: {Name:mk98dbaf62613f1ccac62f79dd01425638748cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:06.814002 1450194 certs.go:337] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt.b8daa033 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt
	I1225 12:17:06.814071 1450194 certs.go:341] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key.b8daa033 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key
	I1225 12:17:06.814117 1450194 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.key
	I1225 12:17:06.814141 1450194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.crt with IP's: []
	I1225 12:17:07.074825 1450194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.crt ...
	I1225 12:17:07.074870 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.crt: {Name:mkf427d4f2beb1a44f9a527a88b98d0682917de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:07.075079 1450194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.key ...
	I1225 12:17:07.075098 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.key: {Name:mkbdc0c24a03e54b883cc92f28982b90f1540815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:07.075311 1450194 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:17:07.075365 1450194 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:17:07.075407 1450194 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:17:07.075447 1450194 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:17:07.076106 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 12:17:07.099868 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 12:17:07.123925 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 12:17:07.147740 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 12:17:07.169904 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:17:07.195421 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:17:07.218986 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:17:07.243695 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:17:07.266749 1450194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:17:07.290212 1450194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 12:17:07.308768 1450194 ssh_runner.go:195] Run: openssl version
	I1225 12:17:07.365545 1450194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:17:07.376577 1450194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:17:07.381460 1450194 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:17:07.381525 1450194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:17:07.387285 1450194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:17:07.397784 1450194 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:17:07.402017 1450194 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:17:07.402130 1450194 kubeadm.go:404] StartCluster: {Name:addons-294911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-294911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:17:07.402220 1450194 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 12:17:07.402271 1450194 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 12:17:07.440957 1450194 cri.go:89] found id: ""
	I1225 12:17:07.441099 1450194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 12:17:07.451011 1450194 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 12:17:07.460933 1450194 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 12:17:07.470823 1450194 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:17:07.470893 1450194 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 12:17:07.525051 1450194 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 12:17:07.525193 1450194 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 12:17:07.667661 1450194 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 12:17:07.667845 1450194 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 12:17:07.667970 1450194 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 12:17:07.900803 1450194 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 12:17:07.971143 1450194 out.go:204]   - Generating certificates and keys ...
	I1225 12:17:07.971269 1450194 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 12:17:07.971378 1450194 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 12:17:08.125495 1450194 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 12:17:08.598889 1450194 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1225 12:17:08.773084 1450194 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1225 12:17:08.937296 1450194 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1225 12:17:09.149948 1450194 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1225 12:17:09.150145 1450194 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-294911 localhost] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1225 12:17:09.241808 1450194 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1225 12:17:09.241961 1450194 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-294911 localhost] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1225 12:17:09.308718 1450194 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 12:17:09.845922 1450194 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 12:17:10.004436 1450194 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1225 12:17:10.004545 1450194 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 12:17:10.072883 1450194 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 12:17:10.183781 1450194 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 12:17:10.308496 1450194 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 12:17:10.566530 1450194 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 12:17:10.567192 1450194 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 12:17:10.570639 1450194 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 12:17:10.572685 1450194 out.go:204]   - Booting up control plane ...
	I1225 12:17:10.572803 1450194 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 12:17:10.572901 1450194 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 12:17:10.573647 1450194 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 12:17:10.589526 1450194 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:17:10.590605 1450194 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:17:10.590718 1450194 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 12:17:10.718182 1450194 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 12:17:18.719780 1450194 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002918 seconds
	I1225 12:17:18.719905 1450194 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 12:17:18.734519 1450194 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 12:17:19.270960 1450194 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 12:17:19.271200 1450194 kubeadm.go:322] [mark-control-plane] Marking the node addons-294911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 12:17:19.781905 1450194 kubeadm.go:322] [bootstrap-token] Using token: 156uv7.k8euh4o53m3eloa5
	I1225 12:17:19.783680 1450194 out.go:204]   - Configuring RBAC rules ...
	I1225 12:17:19.783818 1450194 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 12:17:19.791720 1450194 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 12:17:19.802937 1450194 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 12:17:19.808035 1450194 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 12:17:19.811909 1450194 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 12:17:19.817848 1450194 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 12:17:19.841291 1450194 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 12:17:20.084254 1450194 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 12:17:20.196980 1450194 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 12:17:20.197998 1450194 kubeadm.go:322] 
	I1225 12:17:20.198088 1450194 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 12:17:20.198101 1450194 kubeadm.go:322] 
	I1225 12:17:20.198177 1450194 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 12:17:20.198185 1450194 kubeadm.go:322] 
	I1225 12:17:20.198206 1450194 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 12:17:20.198300 1450194 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 12:17:20.198403 1450194 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 12:17:20.198423 1450194 kubeadm.go:322] 
	I1225 12:17:20.198506 1450194 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 12:17:20.198516 1450194 kubeadm.go:322] 
	I1225 12:17:20.198589 1450194 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 12:17:20.198602 1450194 kubeadm.go:322] 
	I1225 12:17:20.198662 1450194 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 12:17:20.198768 1450194 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 12:17:20.198856 1450194 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 12:17:20.198867 1450194 kubeadm.go:322] 
	I1225 12:17:20.198972 1450194 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 12:17:20.199059 1450194 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 12:17:20.199069 1450194 kubeadm.go:322] 
	I1225 12:17:20.199194 1450194 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 156uv7.k8euh4o53m3eloa5 \
	I1225 12:17:20.199354 1450194 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 12:17:20.199407 1450194 kubeadm.go:322] 	--control-plane 
	I1225 12:17:20.199420 1450194 kubeadm.go:322] 
	I1225 12:17:20.199543 1450194 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 12:17:20.199555 1450194 kubeadm.go:322] 
	I1225 12:17:20.199676 1450194 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 156uv7.k8euh4o53m3eloa5 \
	I1225 12:17:20.199807 1450194 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 12:17:20.199943 1450194 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 12:17:20.199970 1450194 cni.go:84] Creating CNI manager for ""
	I1225 12:17:20.199988 1450194 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:17:20.201840 1450194 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 12:17:20.203316 1450194 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 12:17:20.230584 1450194 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
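For context on the /etc/cni/net.d/1-k8s.conflist file copied above: the log records only its size (457 bytes), not its contents. A minimal bridge CNI conflist of roughly the shape used by minikube's bridge CNI could look like the Go sketch below; every field value here is an illustrative assumption, not the actual payload from this run.

// Illustrative only: write a minimal bridge CNI conflist to /etc/cni/net.d.
// The JSON below is an assumed example, not the exact 457-byte file minikube scp'd.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Requires root; path matches the destination seen in the log above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}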
	I1225 12:17:20.277992 1450194 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 12:17:20.278079 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:20.278162 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=addons-294911 minikube.k8s.io/updated_at=2023_12_25T12_17_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:20.588748 1450194 ops.go:34] apiserver oom_adj: -16
	I1225 12:17:20.589426 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:21.089666 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:21.589913 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:22.089696 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:22.590474 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:23.089709 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:23.589756 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:24.090177 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:24.589556 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:25.090235 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:25.589564 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:26.089920 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:26.590169 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:27.089856 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:27.590180 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:28.089604 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:28.590424 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:29.089458 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:29.590351 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:30.090330 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:30.590397 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:31.089790 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:31.590099 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:32.089540 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:32.590270 1450194 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:17:32.695238 1450194 kubeadm.go:1088] duration metric: took 12.41723226s to wait for elevateKubeSystemPrivileges.
	I1225 12:17:32.695275 1450194 kubeadm.go:406] StartCluster complete in 25.293158086s
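The burst of repeated "kubectl get sa default" invocations above (12:17:20 through 12:17:32, roughly every 500ms) is a readiness poll: the default service account has to exist before the minikube-rbac cluster role binding is effective. A minimal sketch of that poll-until-ready pattern, assuming the same kubectl invocation seen in the log and an arbitrary 2-minute deadline (the timeout and error handling are assumptions, not minikube's implementation):

// Sketch of the polling pattern visible in the log: retry a readiness probe
// every 500ms until it succeeds or the context deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same command the log shows being re-run until it succeeds.
		cmd := exec.CommandContext(ctx, "sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC bootstrap can proceed
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // assumed deadline
	defer cancel()
	if err := waitForDefaultSA(ctx); err != nil {
		fmt.Println(err)
	}
}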
	I1225 12:17:32.695297 1450194 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:32.695431 1450194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:17:32.695856 1450194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:17:32.696058 1450194 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 12:17:32.696208 1450194 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
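The toEnable map above is a plain addon-name to bool toggle table; each true entry produces one of the 'Setting addon ... in "addons-294911"' lines that follow. A small sketch of reducing such a map to a sorted list of enabled addon names (a hypothetical helper for illustration, not minikube's code):

// Hypothetical helper: collect enabled addon names from a toggle map,
// sorted for deterministic output.
package main

import (
	"fmt"
	"sort"
)

func enabledAddons(toEnable map[string]bool) []string {
	names := make([]string, 0, len(toEnable))
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	// Tiny subset of the toggles shown in the log line above.
	toEnable := map[string]bool{"ingress": true, "metrics-server": true, "ambassador": false}
	fmt.Println(enabledAddons(toEnable)) // [ingress metrics-server]
}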
	I1225 12:17:32.696345 1450194 addons.go:69] Setting gcp-auth=true in profile "addons-294911"
	I1225 12:17:32.696359 1450194 addons.go:69] Setting helm-tiller=true in profile "addons-294911"
	I1225 12:17:32.696361 1450194 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-294911"
	I1225 12:17:32.696401 1450194 config.go:182] Loaded profile config "addons-294911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:17:32.696401 1450194 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-294911"
	I1225 12:17:32.696346 1450194 addons.go:69] Setting yakd=true in profile "addons-294911"
	I1225 12:17:32.696433 1450194 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-294911"
	I1225 12:17:32.696439 1450194 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-294911"
	I1225 12:17:32.696448 1450194 addons.go:69] Setting default-storageclass=true in profile "addons-294911"
	I1225 12:17:32.696461 1450194 addons.go:69] Setting registry=true in profile "addons-294911"
	I1225 12:17:32.696464 1450194 addons.go:237] Setting addon helm-tiller=true in "addons-294911"
	I1225 12:17:32.696473 1450194 addons.go:237] Setting addon registry=true in "addons-294911"
	I1225 12:17:32.696481 1450194 addons.go:69] Setting inspektor-gadget=true in profile "addons-294911"
	I1225 12:17:32.696490 1450194 addons.go:237] Setting addon inspektor-gadget=true in "addons-294911"
	I1225 12:17:32.696492 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696494 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696508 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696520 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696529 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696440 1450194 addons.go:237] Setting addon yakd=true in "addons-294911"
	I1225 12:17:32.696635 1450194 addons.go:69] Setting cloud-spanner=true in profile "addons-294911"
	I1225 12:17:32.696658 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696664 1450194 addons.go:237] Setting addon cloud-spanner=true in "addons-294911"
	I1225 12:17:32.696714 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696724 1450194 addons.go:69] Setting metrics-server=true in profile "addons-294911"
	I1225 12:17:32.696754 1450194 addons.go:237] Setting addon metrics-server=true in "addons-294911"
	I1225 12:17:32.696792 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.696924 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.696926 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.696470 1450194 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-294911"
	I1225 12:17:32.696960 1450194 addons.go:69] Setting volumesnapshots=true in profile "addons-294911"
	I1225 12:17:32.696965 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.696972 1450194 addons.go:237] Setting addon volumesnapshots=true in "addons-294911"
	I1225 12:17:32.697004 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.697122 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697155 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697176 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.696424 1450194 mustload.go:65] Loading cluster: addons-294911
	I1225 12:17:32.697215 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697305 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697322 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697334 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697337 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697350 1450194 config.go:182] Loaded profile config "addons-294911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:17:32.697388 1450194 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-294911"
	I1225 12:17:32.697411 1450194 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-294911"
	I1225 12:17:32.697423 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697450 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697593 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697622 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.696453 1450194 addons.go:69] Setting storage-provisioner=true in profile "addons-294911"
	I1225 12:17:32.697710 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697714 1450194 addons.go:237] Setting addon storage-provisioner=true in "addons-294911"
	I1225 12:17:32.697727 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697753 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.697410 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697785 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697795 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.697808 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697395 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.697997 1450194 addons.go:69] Setting ingress=true in profile "addons-294911"
	I1225 12:17:32.698018 1450194 addons.go:237] Setting addon ingress=true in "addons-294911"
	I1225 12:17:32.698029 1450194 addons.go:69] Setting ingress-dns=true in profile "addons-294911"
	I1225 12:17:32.698039 1450194 addons.go:237] Setting addon ingress-dns=true in "addons-294911"
	I1225 12:17:32.697180 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.698077 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.698461 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.698795 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.699047 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.699099 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.699257 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.699293 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.718309 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1225 12:17:32.718350 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1225 12:17:32.718553 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I1225 12:17:32.718627 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I1225 12:17:32.718753 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
	I1225 12:17:32.718771 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.719131 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.719166 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.719244 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.719346 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.719766 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.719791 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.719932 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.719952 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.720070 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.720092 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.720450 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.720504 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.720510 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.720529 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.720626 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.720940 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.721505 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.721544 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.727292 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.727503 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.727520 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.727591 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.727663 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.728517 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.728574 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.730765 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.730947 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.738907 1450194 addons.go:237] Setting addon default-storageclass=true in "addons-294911"
	I1225 12:17:32.738964 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.739406 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.739447 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.739732 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.740367 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.740410 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.746309 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43087
	I1225 12:17:32.746909 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.747511 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.747543 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.748093 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.748694 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.748767 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.750602 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I1225 12:17:32.751181 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.751302 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1225 12:17:32.751442 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I1225 12:17:32.751778 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.752006 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I1225 12:17:32.752272 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.752293 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.752356 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.752455 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.752478 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.752669 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.752729 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.752777 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.752791 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.752925 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.753146 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.753205 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.753820 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.753851 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.753861 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.753888 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.754133 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.754152 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.754756 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.755808 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.755875 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I1225 12:17:32.756314 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1225 12:17:32.756320 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.756534 1450194 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-294911"
	I1225 12:17:32.756579 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.756821 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.757246 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.757291 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.757334 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.757351 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.757763 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.758317 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.758344 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.758566 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.760894 1450194 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I1225 12:17:32.759115 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.762383 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.762614 1450194 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1225 12:17:32.762629 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1225 12:17:32.762647 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.763377 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.763620 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.766490 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:32.766919 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.766957 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.767261 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.767329 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.767353 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.767629 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.767915 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.768122 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.768264 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.774070 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1225 12:17:32.775125 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.775851 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.775881 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.776309 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.776738 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.778795 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.780641 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1225 12:17:32.779660 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I1225 12:17:32.782206 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1225 12:17:32.782222 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1225 12:17:32.782247 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.783135 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I1225 12:17:32.783365 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.783880 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.784339 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.784358 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.784802 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.785042 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.785274 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.785291 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.786145 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.786190 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.786738 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.786765 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.786962 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.787214 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.787366 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.787535 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.789476 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1225 12:17:32.787857 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I1225 12:17:32.787977 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.789276 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.790799 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I1225 12:17:32.792587 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1225 12:17:32.791570 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.791948 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.792195 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.792305 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I1225 12:17:32.796785 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1225 12:17:32.795782 1450194 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1225 12:17:32.796472 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.796723 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I1225 12:17:32.797168 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.797319 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.798550 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.800356 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1225 12:17:32.798948 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.799078 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.799370 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.799515 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.799907 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1225 12:17:32.800169 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I1225 12:17:32.803218 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1225 12:17:32.805038 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1225 12:17:32.802014 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.806365 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1225 12:17:32.802269 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.802517 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.802605 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.802667 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.805069 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I1225 12:17:32.801788 1450194 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 12:17:32.806143 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I1225 12:17:32.802341 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.807024 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.807070 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.809495 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1225 12:17:32.808077 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 12:17:32.808083 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.808374 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.808450 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.808581 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.808782 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.808879 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.808985 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.809162 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.810528 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
	I1225 12:17:32.813186 1450194 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1225 12:17:32.810966 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.810708 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.810995 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.810539 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I1225 12:17:32.811040 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.811050 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.811088 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.811499 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.811520 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.811530 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.812420 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.813113 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.814811 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1225 12:17:32.814829 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1225 12:17:32.814852 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.814913 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.815013 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.816216 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.818284 1450194 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1225 12:17:32.816217 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.816327 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.816579 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.816603 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.816895 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.817062 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.817165 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.817603 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.819949 1450194 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1225 12:17:32.820647 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.821350 1450194 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1225 12:17:32.821372 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.821434 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1225 12:17:32.821437 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.821461 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.823161 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.823233 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.823240 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.823249 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.824010 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.824036 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.824054 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.824161 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.823254 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.823262 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.824264 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.824288 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.823374 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.824304 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.823639 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.824360 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.824644 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.824727 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.824760 1450194 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1225 12:17:32.825660 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.825674 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.825701 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.826041 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.826996 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.826223 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.827235 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.827540 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.827848 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:32.829379 1450194 out.go:177]   - Using image docker.io/registry:2.8.3
	I1225 12:17:32.827886 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:32.828349 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.829521 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.830266 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.833548 1450194 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1225 12:17:32.832164 1450194 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1225 12:17:32.832211 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.832812 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.835046 1450194 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1225 12:17:32.835083 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.836287 1450194 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1225 12:17:32.836366 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1225 12:17:32.836602 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.839204 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1225 12:17:32.839229 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1225 12:17:32.839253 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.838015 1450194 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1225 12:17:32.838043 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.838306 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.839292 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1225 12:17:32.839322 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.841477 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I1225 12:17:32.842054 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.842792 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.842823 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.843219 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.843495 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.846068 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.846233 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.846346 1450194 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 12:17:32.846360 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 12:17:32.846379 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.847285 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.847989 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.848220 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.848261 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.848559 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.848923 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.848939 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.848958 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.849101 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.849487 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.849509 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.849533 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.850019 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.850290 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.850378 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.850465 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.850483 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.850516 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.850551 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.850641 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.850694 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.850830 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.850855 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.850966 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.851023 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.851088 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.851571 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.851812 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I1225 12:17:32.852290 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.852779 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.852810 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.853197 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.853436 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.853451 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I1225 12:17:32.853911 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.854623 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.854643 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.854983 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.855331 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.855578 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.858018 1450194 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1225 12:17:32.856006 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1225 12:17:32.856252 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I1225 12:17:32.857293 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.859989 1450194 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1225 12:17:32.860017 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1225 12:17:32.860041 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.861524 1450194 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1225 12:17:32.860285 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.860357 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.864469 1450194 out.go:177]   - Using image docker.io/busybox:stable
	I1225 12:17:32.865920 1450194 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1225 12:17:32.865940 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1225 12:17:32.863568 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.865963 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.865979 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.863710 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.863845 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.863977 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I1225 12:17:32.864304 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.866032 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.866036 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.866061 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.866194 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.866335 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.866503 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.866503 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.866513 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.866558 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:32.866804 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.867046 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.867498 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:32.867521 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:32.867889 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:32.868783 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:32.869071 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.870749 1450194 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1225 12:17:32.869388 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.870364 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.870649 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:32.871084 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.872623 1450194 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1225 12:17:32.872633 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1225 12:17:32.872645 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.872701 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.872727 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.872810 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.874510 1450194 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1225 12:17:32.873242 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.875532 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.876140 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.876143 1450194 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1225 12:17:32.876157 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1225 12:17:32.876161 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.876168 1450194 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:17:32.877556 1450194 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:17:32.877570 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 12:17:32.877584 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.876174 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:32.876196 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.876367 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.877837 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.877978 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.878089 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.881188 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.881746 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.882109 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.882171 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.882344 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:32.882368 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:32.882492 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.882564 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:32.882642 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.882690 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:32.882720 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.882782 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:32.882828 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:32.883190 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:33.176313 1450194 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1225 12:17:33.176338 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1225 12:17:33.212289 1450194 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
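The command above edits the coredns ConfigMap in place so cluster DNS resolves host.minikube.internal to the host-only network address. A minimal way to confirm the result, assuming the default minikube Corefile layout (this kubectl invocation is illustrative and is not part of the test run):

	# Print the live Corefile; after the sed pipeline above it should contain a
	# hosts block inserted ahead of the forward directive:
	#         hosts {
	#            192.168.39.1 host.minikube.internal
	#            fallthrough
	#         }
	kubectl --context addons-294911 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'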
	I1225 12:17:33.273742 1450194 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-294911" context rescaled to 1 replicas
	I1225 12:17:33.273794 1450194 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:17:33.277647 1450194 out.go:177] * Verifying Kubernetes components...
	I1225 12:17:33.279319 1450194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:17:33.354721 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1225 12:17:33.406151 1450194 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1225 12:17:33.406185 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1225 12:17:33.412987 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:17:33.418266 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1225 12:17:33.418284 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1225 12:17:33.436541 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 12:17:33.438653 1450194 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1225 12:17:33.438680 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1225 12:17:33.439079 1450194 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1225 12:17:33.439103 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1225 12:17:33.446881 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1225 12:17:33.446908 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1225 12:17:33.452334 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1225 12:17:33.455520 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1225 12:17:33.470632 1450194 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1225 12:17:33.470656 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1225 12:17:33.472551 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1225 12:17:33.473884 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1225 12:17:33.475182 1450194 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 12:17:33.475198 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1225 12:17:33.520379 1450194 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1225 12:17:33.520414 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1225 12:17:33.599990 1450194 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1225 12:17:33.600019 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1225 12:17:33.621312 1450194 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1225 12:17:33.621342 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1225 12:17:33.635317 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1225 12:17:33.635350 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1225 12:17:33.716011 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1225 12:17:33.716043 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1225 12:17:33.719414 1450194 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 12:17:33.719446 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 12:17:33.736928 1450194 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1225 12:17:33.736962 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1225 12:17:33.737074 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1225 12:17:33.804973 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1225 12:17:33.805003 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1225 12:17:33.821857 1450194 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1225 12:17:33.821881 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1225 12:17:33.840152 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1225 12:17:33.840177 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1225 12:17:33.953819 1450194 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 12:17:33.953853 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 12:17:33.960135 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1225 12:17:33.961176 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1225 12:17:33.961199 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1225 12:17:33.990455 1450194 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1225 12:17:33.990482 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1225 12:17:34.022002 1450194 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1225 12:17:34.022028 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1225 12:17:34.038146 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1225 12:17:34.038172 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1225 12:17:34.129281 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1225 12:17:34.140164 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1225 12:17:34.140193 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1225 12:17:34.154548 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 12:17:34.178384 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1225 12:17:34.184382 1450194 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1225 12:17:34.184423 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1225 12:17:34.248744 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1225 12:17:34.248772 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1225 12:17:34.297989 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1225 12:17:34.298018 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1225 12:17:34.360455 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1225 12:17:34.360489 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1225 12:17:34.404536 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1225 12:17:34.404566 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1225 12:17:34.455605 1450194 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1225 12:17:34.455634 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1225 12:17:34.502701 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1225 12:17:34.502729 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1225 12:17:34.544531 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1225 12:17:34.570476 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1225 12:17:34.570514 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1225 12:17:34.640253 1450194 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1225 12:17:34.640280 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1225 12:17:34.691678 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1225 12:17:36.447074 1450194 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.234733595s)
	I1225 12:17:36.447120 1450194 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1225 12:17:36.447209 1450194 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.167847366s)
	I1225 12:17:36.467703 1450194 node_ready.go:35] waiting up to 6m0s for node "addons-294911" to be "Ready" ...
	I1225 12:17:37.047259 1450194 node_ready.go:49] node "addons-294911" has status "Ready":"True"
	I1225 12:17:37.047304 1450194 node_ready.go:38] duration metric: took 579.546501ms waiting for node "addons-294911" to be "Ready" ...
	I1225 12:17:37.047320 1450194 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:17:37.696814 1450194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbl8g" in "kube-system" namespace to be "Ready" ...
	I1225 12:17:39.591825 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.237062585s)
	I1225 12:17:39.591883 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:39.591894 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:39.592298 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:39.592321 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:39.592332 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:39.592341 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:39.592298 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:39.592605 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:39.592624 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:39.592643 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:39.711163 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-gbl8g" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:39.968511 1450194 pod_ready.go:97] pod "coredns-5dd5756b68-gbl8g" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.148 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2023-12-25 12:17:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-12-25 12:17:37 +0000 UTC,FinishedAt:2023-12-25 12:17:38 +0000 UTC,ContainerID:cri-o://540d7667323700d11bb7d2b98b00ffcb4c3d07ec1b494e2b10592cdb2c65c0c7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://540d7667323700d11bb7d2b98b00ffcb4c3d07ec1b494e2b10592cdb2c65c0c7 Started:0xc0025d384c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1225 12:17:39.968544 1450194 pod_ready.go:81] duration metric: took 2.27163411s waiting for pod "coredns-5dd5756b68-gbl8g" in "kube-system" namespace to be "Ready" ...
	E1225 12:17:39.968556 1450194 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-gbl8g" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-25 12:17:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.148 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2023-12-25 12:17:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-12-25 12:17:37 +0000 UTC,FinishedAt:2023-12-25 12:17:38 +0000 UTC,ContainerID:cri-o://540d7667323700d11bb7d2b98b00ffcb4c3d07ec1b494e2b10592cdb2c65c0c7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://540d7667323700d11bb7d2b98b00ffcb4c3d07ec1b494e2b10592cdb2c65c0c7 Started:0xc0025d384c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
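The first coredns replica exits with code 2 and its pod is recorded as Failed, so the wait moves on to the surviving replica below. If this needed manual triage, the usual next step would be to inspect the failed pod directly; a hypothetical check, not performed by the test (pod name taken from the log line above):

	# Show events and container state for the failed replica, then its logs.
	kubectl --context addons-294911 -n kube-system describe pod coredns-5dd5756b68-gbl8g
	kubectl --context addons-294911 -n kube-system logs coredns-5dd5756b68-gbl8g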
	I1225 12:17:39.968563 1450194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace to be "Ready" ...
	I1225 12:17:40.629574 1450194 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1225 12:17:40.629631 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:40.633338 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:40.634004 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:40.634058 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:40.634221 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:40.634516 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:40.634696 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:40.634880 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:40.799685 1450194 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1225 12:17:40.840826 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.427792876s)
	I1225 12:17:40.840887 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.40430982s)
	I1225 12:17:40.840891 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.840953 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.388580662s)
	I1225 12:17:40.840960 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.841003 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.841017 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.840936 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.841098 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.841339 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.841359 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.841370 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.841380 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.841429 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.841440 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.841451 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.841452 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:40.841460 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.841476 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.841484 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.841497 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:40.841505 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:40.841682 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:40.841757 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.841802 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.841821 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:40.841800 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.841878 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.842152 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:40.842204 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:40.841774 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:40.846084 1450194 addons.go:237] Setting addon gcp-auth=true in "addons-294911"
	I1225 12:17:40.846142 1450194 host.go:66] Checking if "addons-294911" exists ...
	I1225 12:17:40.846564 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:40.846602 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:40.862192 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I1225 12:17:40.862711 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:40.863260 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:40.863286 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:40.863670 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:40.864194 1450194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:17:40.864224 1450194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:17:40.880760 1450194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I1225 12:17:40.881251 1450194 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:17:40.881739 1450194 main.go:141] libmachine: Using API Version  1
	I1225 12:17:40.881765 1450194 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:17:40.882176 1450194 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:17:40.882374 1450194 main.go:141] libmachine: (addons-294911) Calling .GetState
	I1225 12:17:40.884289 1450194 main.go:141] libmachine: (addons-294911) Calling .DriverName
	I1225 12:17:40.884561 1450194 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1225 12:17:40.884585 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHHostname
	I1225 12:17:40.887817 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:40.888241 1450194 main.go:141] libmachine: (addons-294911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:01:f9", ip: ""} in network mk-addons-294911: {Iface:virbr1 ExpiryTime:2023-12-25 13:16:49 +0000 UTC Type:0 Mac:52:54:00:a6:01:f9 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:addons-294911 Clientid:01:52:54:00:a6:01:f9}
	I1225 12:17:40.888274 1450194 main.go:141] libmachine: (addons-294911) DBG | domain addons-294911 has defined IP address 192.168.39.148 and MAC address 52:54:00:a6:01:f9 in network mk-addons-294911
	I1225 12:17:40.888418 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHPort
	I1225 12:17:40.888631 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHKeyPath
	I1225 12:17:40.888836 1450194 main.go:141] libmachine: (addons-294911) Calling .GetSSHUsername
	I1225 12:17:40.889000 1450194 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/addons-294911/id_rsa Username:docker}
	I1225 12:17:41.153067 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.69749506s)
	I1225 12:17:41.153082 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.68049529s)
	I1225 12:17:41.153120 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.153135 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.153192 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.153231 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.153613 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:41.153646 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:41.153679 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.153701 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:41.153715 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.153729 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.153726 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.153752 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:41.153770 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.153782 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.153992 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.154009 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:41.154137 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.154159 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:41.211265 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.211301 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.211631 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:41.211686 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.211711 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:41.282064 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:41.282128 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:41.282680 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:41.282737 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:41.282746 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:42.041120 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:43.019633 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.545700513s)
	I1225 12:17:43.019638 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.282516307s)
	I1225 12:17:43.019701 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.059502496s)
	I1225 12:17:43.019737 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.019700 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.019758 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.019767 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.019810 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.890496146s)
	I1225 12:17:43.019739 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.019835 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	W1225 12:17:43.019857 1450194 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1225 12:17:43.019888 1450194 retry.go:31] will retry after 209.232239ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
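The apply fails because the VolumeSnapshotClass object is submitted in the same batch as the CRDs that define it, before the API server has registered the new kind; the addon manager simply retries (and later re-applies with --force, as seen further down). An equivalent manual guard, following the "ensure CRDs are installed first" hint and using the CRD names from the stdout above, would be to wait for the CRDs to become established before re-applying the snapshot class; this is only an illustration, not what minikube does:

	# Block until the snapshot CRDs are served before re-applying the class manifest.
	kubectl --context addons-294911 wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io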
	I1225 12:17:43.019894 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.865316002s)
	I1225 12:17:43.019920 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.019932 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.019953 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.841526645s)
	I1225 12:17:43.019974 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.020010 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.020295 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.475495538s)
	I1225 12:17:43.020332 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.020346 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022528 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.022530 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022550 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022554 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022562 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022570 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022582 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022590 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022604 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.022603 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022628 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022631 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022639 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022649 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022659 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022665 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.022683 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.022640 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022705 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022713 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022722 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022570 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022734 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022749 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.022720 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022759 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.022771 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022739 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.022833 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.022837 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.022842 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.023177 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.023178 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.023191 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.023204 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.023213 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.023221 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.023222 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.023229 1450194 addons.go:473] Verifying addon metrics-server=true in "addons-294911"
	I1225 12:17:43.023247 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.023254 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.023264 1450194 addons.go:473] Verifying addon registry=true in "addons-294911"
	I1225 12:17:43.023289 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.023327 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.023356 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.024860 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.024884 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.026655 1450194 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-294911 service yakd-dashboard -n yakd-dashboard
	
	
	I1225 12:17:43.026684 1450194 out.go:177] * Verifying registry addon...
	I1225 12:17:43.026714 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.028359 1450194 addons.go:473] Verifying addon ingress=true in "addons-294911"
	I1225 12:17:43.029767 1450194 out.go:177] * Verifying ingress addon...
	I1225 12:17:43.029047 1450194 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1225 12:17:43.032107 1450194 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1225 12:17:43.082859 1450194 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1225 12:17:43.082894 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:43.096672 1450194 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1225 12:17:43.096698 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
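From here the registry and ingress verifiers repeatedly poll the pods matching each logged selector until they leave Pending. A rough manual way to watch the same ingress pods, shown only for orientation (selector taken from the log line above):

	# List the ingress-nginx pods the verifier is polling; -w streams updates.
	kubectl --context addons-294911 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -w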
	I1225 12:17:43.229926 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1225 12:17:43.590928 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:43.606783 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:43.754405 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.062649293s)
	I1225 12:17:43.754492 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.754514 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.754497 1450194 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.869909547s)
	I1225 12:17:43.756527 1450194 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1225 12:17:43.754875 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.754916 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.757995 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.758013 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:43.758033 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:43.759275 1450194 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1225 12:17:43.760597 1450194 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1225 12:17:43.760617 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1225 12:17:43.758318 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:43.760652 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:43.760672 1450194 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-294911"
	I1225 12:17:43.758346 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:43.762267 1450194 out.go:177] * Verifying csi-hostpath-driver addon...
	I1225 12:17:43.764173 1450194 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1225 12:17:43.814219 1450194 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1225 12:17:43.814250 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:43.828168 1450194 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1225 12:17:43.828194 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1225 12:17:44.013045 1450194 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1225 12:17:44.013072 1450194 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1225 12:17:44.073206 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:44.073215 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:44.148338 1450194 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1225 12:17:44.404873 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:44.489462 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:44.539639 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:44.541395 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:44.773610 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:45.080007 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:45.082724 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:45.286589 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:45.536389 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:45.541991 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:45.774376 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:46.011200 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.781203249s)
	I1225 12:17:46.011279 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:46.011304 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:46.011750 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:46.011760 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:46.011777 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:46.011797 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:46.011811 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:46.012105 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:46.012123 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:46.012122 1450194 main.go:141] libmachine: (addons-294911) DBG | Closing plugin on server side
	I1225 12:17:46.058318 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:46.068418 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:46.300563 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:46.384621 1450194 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.236222541s)
	I1225 12:17:46.384684 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:46.384707 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:46.385052 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:46.385115 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:46.385136 1450194 main.go:141] libmachine: Making call to close driver server
	I1225 12:17:46.385148 1450194 main.go:141] libmachine: (addons-294911) Calling .Close
	I1225 12:17:46.385474 1450194 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:17:46.385495 1450194 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:17:46.386855 1450194 addons.go:473] Verifying addon gcp-auth=true in "addons-294911"
	I1225 12:17:46.389098 1450194 out.go:177] * Verifying gcp-auth addon...
	I1225 12:17:46.391560 1450194 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1225 12:17:46.402533 1450194 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1225 12:17:46.402565 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:46.568212 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:46.585245 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:46.803335 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:46.902199 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:46.997063 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:47.046021 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:47.053964 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:47.281530 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:47.395592 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:47.535781 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:47.537653 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:47.772828 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:47.896253 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:48.040167 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:48.041538 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:48.274541 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:48.396579 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:48.544906 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:48.546892 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:48.784293 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:48.897739 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:49.040594 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:49.051068 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:49.270511 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:49.406635 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:49.492578 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:49.545048 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:49.545578 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:49.785523 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:49.900374 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:50.047459 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:50.051010 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:50.275433 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:50.398376 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:50.540020 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:50.543204 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:50.774865 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:50.895750 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:51.041379 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:51.041415 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:51.272711 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:51.397299 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:51.540343 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:51.545595 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:51.770843 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:51.899307 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:51.976067 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:52.036046 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:52.037587 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:52.273002 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:52.396020 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:52.595056 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:52.595743 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:52.780169 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:53.131269 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:53.131310 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:53.131803 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:53.271420 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:53.403558 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:53.542042 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:53.547874 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:53.782315 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:53.896081 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:53.976331 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:54.037770 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:54.041045 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:54.270686 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:54.400169 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:54.539650 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:54.548690 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:54.783171 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:54.903529 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:55.049675 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:55.056754 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:55.271494 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:55.396369 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:55.539553 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:55.541566 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:55.852200 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:55.896799 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:55.977784 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:56.038533 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:56.040642 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:56.271353 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:56.397423 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:56.536850 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:56.540442 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:56.770864 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:56.900472 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:57.036984 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:57.039376 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:57.270675 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:57.409261 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:57.541059 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:57.541403 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:57.770421 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:57.897937 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:57.978544 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:17:58.037890 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:58.040254 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:58.274261 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:58.396215 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:58.536242 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:58.547246 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:58.772053 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:59.157876 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:59.177444 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:59.178106 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:59.276762 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:59.397976 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:17:59.540814 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:17:59.543003 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:17:59.770427 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:17:59.901449 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:00.036488 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:00.037917 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:00.270811 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:00.396169 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:00.476690 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:00.536858 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:00.539132 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:00.771152 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:00.916317 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:01.035468 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:01.038212 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:01.271325 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:01.397390 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:01.536105 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:01.538079 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:01.779555 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:01.899489 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:02.036681 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:02.038928 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:02.271568 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:02.395892 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:02.543764 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:02.545687 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:02.770989 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:02.897507 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:02.975927 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:03.035641 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:03.039207 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:03.271190 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:03.396416 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:03.535649 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:03.538836 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:03.769916 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:03.896361 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:04.035210 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:04.038812 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:04.273456 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:04.396220 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:04.534460 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:04.537611 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:04.769709 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:04.896088 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:05.036662 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:05.038754 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:05.269807 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:05.398818 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:05.476875 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:05.537380 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:05.539647 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:05.769837 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:05.895642 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:06.035316 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:06.038938 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:06.269777 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:06.397079 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:06.540796 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:06.544049 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:06.770401 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:06.897025 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:07.037454 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:07.039279 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:07.273055 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:07.396184 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:07.536223 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:07.538858 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:07.771669 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:07.896270 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:07.976529 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:08.039354 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:08.041896 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:08.272739 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:08.396233 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:08.537138 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:08.539102 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:08.782830 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:08.900034 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:09.036990 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:09.037123 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:09.295955 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:09.396315 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:09.535987 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:09.538647 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:09.801831 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:09.915571 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:09.989827 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:10.036928 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:10.039611 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:10.270357 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:10.395747 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:10.536436 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:10.538684 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:10.772737 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:10.900854 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:11.036326 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:11.037778 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:11.270893 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:11.396770 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:11.537394 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:11.537540 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:11.771903 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:11.895684 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:12.036463 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:12.041936 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:12.271421 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:12.397041 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:12.476202 1450194 pod_ready.go:102] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"False"
	I1225 12:18:12.536222 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:12.539404 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:12.771234 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:12.895052 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:12.977614 1450194 pod_ready.go:92] pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:12.977646 1450194 pod_ready.go:81] duration metric: took 33.009074554s waiting for pod "coredns-5dd5756b68-zq2p5" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.977661 1450194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.984924 1450194 pod_ready.go:92] pod "etcd-addons-294911" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:12.984956 1450194 pod_ready.go:81] duration metric: took 7.286539ms waiting for pod "etcd-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.984967 1450194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.991211 1450194 pod_ready.go:92] pod "kube-apiserver-addons-294911" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:12.991240 1450194 pod_ready.go:81] duration metric: took 6.266754ms waiting for pod "kube-apiserver-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.991255 1450194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.997638 1450194 pod_ready.go:92] pod "kube-controller-manager-addons-294911" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:12.997662 1450194 pod_ready.go:81] duration metric: took 6.396648ms waiting for pod "kube-controller-manager-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:12.997678 1450194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d9h2" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:13.004684 1450194 pod_ready.go:92] pod "kube-proxy-4d9h2" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:13.004709 1450194 pod_ready.go:81] duration metric: took 7.019177ms waiting for pod "kube-proxy-4d9h2" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:13.004720 1450194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:13.047304 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:13.050255 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:13.271093 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:13.373526 1450194 pod_ready.go:92] pod "kube-scheduler-addons-294911" in "kube-system" namespace has status "Ready":"True"
	I1225 12:18:13.373564 1450194 pod_ready.go:81] duration metric: took 368.834004ms waiting for pod "kube-scheduler-addons-294911" in "kube-system" namespace to be "Ready" ...
	I1225 12:18:13.373577 1450194 pod_ready.go:38] duration metric: took 36.326241832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:18:13.373602 1450194 api_server.go:52] waiting for apiserver process to appear ...
	I1225 12:18:13.373682 1450194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:18:13.395742 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:13.417337 1450194 api_server.go:72] duration metric: took 40.143448774s to wait for apiserver process to appear ...
	I1225 12:18:13.417375 1450194 api_server.go:88] waiting for apiserver healthz status ...
	I1225 12:18:13.417404 1450194 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I1225 12:18:13.424156 1450194 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I1225 12:18:13.426212 1450194 api_server.go:141] control plane version: v1.28.4
	I1225 12:18:13.426241 1450194 api_server.go:131] duration metric: took 8.857662ms to wait for apiserver health ...
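The healthz probe logged between 12:18:13.417 and 12:18:13.426 is a GET against https://192.168.39.148:8443/healthz that must come back 200 with body "ok" before the wait proceeds. A bare-bones version of that check is below; it skips TLS verification purely for brevity, whereas the real check authenticates with the client certificates from the generated kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET to the apiserver /healthz endpoint and reports
// whether it answered 200 "ok", as the log lines above show.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip cert verification instead of presenting
		// the cluster's client cert/key.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.148:8443/healthz"); err != nil {
		panic(err)
	}
}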
	I1225 12:18:13.426249 1450194 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 12:18:13.860493 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:13.860883 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:13.868948 1450194 system_pods.go:59] 18 kube-system pods found
	I1225 12:18:13.869029 1450194 system_pods.go:61] "coredns-5dd5756b68-zq2p5" [7a13d8b3-0b95-4925-8ac4-b9a6cde3cad2] Running
	I1225 12:18:13.869041 1450194 system_pods.go:61] "csi-hostpath-attacher-0" [01225fc0-02b3-487e-8310-0bbb25c706ef] Running
	I1225 12:18:13.869048 1450194 system_pods.go:61] "csi-hostpath-resizer-0" [40ea05bd-4291-4932-87dd-874475328c4f] Running
	I1225 12:18:13.869060 1450194 system_pods.go:61] "csi-hostpathplugin-gb726" [cb471e2e-c800-4c0a-b52a-8b4f9e64737b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 12:18:13.869070 1450194 system_pods.go:61] "etcd-addons-294911" [3094b9a7-0410-4a4d-9e85-59cc2151a2c7] Running
	I1225 12:18:13.869084 1450194 system_pods.go:61] "kube-apiserver-addons-294911" [771d232c-64b2-4841-826f-67d02ef29aec] Running
	I1225 12:18:13.869092 1450194 system_pods.go:61] "kube-controller-manager-addons-294911" [7b68766e-f61b-4f79-8927-e05fe399f609] Running
	I1225 12:18:13.869103 1450194 system_pods.go:61] "kube-ingress-dns-minikube" [7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 12:18:13.869124 1450194 system_pods.go:61] "kube-proxy-4d9h2" [8c4a266a-840c-4ea2-86a0-a15bf426f8ac] Running
	I1225 12:18:13.869131 1450194 system_pods.go:61] "kube-scheduler-addons-294911" [41ba4d9c-fcce-4587-af23-ac8b8e07e2ed] Running
	I1225 12:18:13.869143 1450194 system_pods.go:61] "metrics-server-7c66d45ddc-6dhqs" [8cfe97c5-d071-4349-bf5d-d30177e71d22] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 12:18:13.869152 1450194 system_pods.go:61] "nvidia-device-plugin-daemonset-6ssjm" [10e8dcb3-74eb-4487-bdf0-a6f69d444a40] Running
	I1225 12:18:13.869164 1450194 system_pods.go:61] "registry-4qz4b" [7610bd4f-9226-4f2c-8284-ec69f5f1c21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 12:18:13.869179 1450194 system_pods.go:61] "registry-proxy-dpb6q" [11af4342-d52d-4596-bd37-0c9cefafb061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 12:18:13.869196 1450194 system_pods.go:61] "snapshot-controller-58dbcc7b99-64jpz" [1972a521-8bc9-4791-aa3e-62a2da0c04e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 12:18:13.869211 1450194 system_pods.go:61] "snapshot-controller-58dbcc7b99-tfbhc" [6bcddd53-09ea-49c4-8cf8-dea756c13bcb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 12:18:13.869222 1450194 system_pods.go:61] "storage-provisioner" [fecc0bf3-4efa-47d2-a9d7-cd32744f43a1] Running
	I1225 12:18:13.869233 1450194 system_pods.go:61] "tiller-deploy-7b677967b9-p7zsn" [03ef0447-4ed5-4a60-808d-639937566c1d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1225 12:18:13.869243 1450194 system_pods.go:74] duration metric: took 442.986335ms to wait for pod list to return data ...
	I1225 12:18:13.869257 1450194 default_sa.go:34] waiting for default service account to be created ...
	I1225 12:18:13.884518 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:13.886657 1450194 default_sa.go:45] found service account: "default"
	I1225 12:18:13.886679 1450194 default_sa.go:55] duration metric: took 17.411712ms for default service account to be created ...
	I1225 12:18:13.886691 1450194 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 12:18:13.899951 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:13.982265 1450194 system_pods.go:86] 18 kube-system pods found
	I1225 12:18:13.982298 1450194 system_pods.go:89] "coredns-5dd5756b68-zq2p5" [7a13d8b3-0b95-4925-8ac4-b9a6cde3cad2] Running
	I1225 12:18:13.982304 1450194 system_pods.go:89] "csi-hostpath-attacher-0" [01225fc0-02b3-487e-8310-0bbb25c706ef] Running
	I1225 12:18:13.982308 1450194 system_pods.go:89] "csi-hostpath-resizer-0" [40ea05bd-4291-4932-87dd-874475328c4f] Running
	I1225 12:18:13.982316 1450194 system_pods.go:89] "csi-hostpathplugin-gb726" [cb471e2e-c800-4c0a-b52a-8b4f9e64737b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 12:18:13.982321 1450194 system_pods.go:89] "etcd-addons-294911" [3094b9a7-0410-4a4d-9e85-59cc2151a2c7] Running
	I1225 12:18:13.982327 1450194 system_pods.go:89] "kube-apiserver-addons-294911" [771d232c-64b2-4841-826f-67d02ef29aec] Running
	I1225 12:18:13.982332 1450194 system_pods.go:89] "kube-controller-manager-addons-294911" [7b68766e-f61b-4f79-8927-e05fe399f609] Running
	I1225 12:18:13.982341 1450194 system_pods.go:89] "kube-ingress-dns-minikube" [7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 12:18:13.982347 1450194 system_pods.go:89] "kube-proxy-4d9h2" [8c4a266a-840c-4ea2-86a0-a15bf426f8ac] Running
	I1225 12:18:13.982355 1450194 system_pods.go:89] "kube-scheduler-addons-294911" [41ba4d9c-fcce-4587-af23-ac8b8e07e2ed] Running
	I1225 12:18:13.982364 1450194 system_pods.go:89] "metrics-server-7c66d45ddc-6dhqs" [8cfe97c5-d071-4349-bf5d-d30177e71d22] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 12:18:13.982375 1450194 system_pods.go:89] "nvidia-device-plugin-daemonset-6ssjm" [10e8dcb3-74eb-4487-bdf0-a6f69d444a40] Running
	I1225 12:18:13.982389 1450194 system_pods.go:89] "registry-4qz4b" [7610bd4f-9226-4f2c-8284-ec69f5f1c21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 12:18:13.982400 1450194 system_pods.go:89] "registry-proxy-dpb6q" [11af4342-d52d-4596-bd37-0c9cefafb061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 12:18:13.982409 1450194 system_pods.go:89] "snapshot-controller-58dbcc7b99-64jpz" [1972a521-8bc9-4791-aa3e-62a2da0c04e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 12:18:13.982418 1450194 system_pods.go:89] "snapshot-controller-58dbcc7b99-tfbhc" [6bcddd53-09ea-49c4-8cf8-dea756c13bcb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 12:18:13.982425 1450194 system_pods.go:89] "storage-provisioner" [fecc0bf3-4efa-47d2-a9d7-cd32744f43a1] Running
	I1225 12:18:13.982431 1450194 system_pods.go:89] "tiller-deploy-7b677967b9-p7zsn" [03ef0447-4ed5-4a60-808d-639937566c1d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1225 12:18:13.982459 1450194 system_pods.go:126] duration metric: took 95.760228ms to wait for k8s-apps to be running ...
	I1225 12:18:13.982474 1450194 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:18:13.982521 1450194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:18:14.011690 1450194 system_svc.go:56] duration metric: took 29.204169ms WaitForService to wait for kubelet.
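The WaitForService step above amounts to running the logged "systemctl is-active --quiet service kubelet" on the guest and treating exit status 0 as "running". A local equivalent, without the ssh hop and with the unit name simplified to kubelet:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active; the
// --quiet flag suppresses output so the exit status alone carries the answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}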
	I1225 12:18:14.011732 1450194 kubeadm.go:581] duration metric: took 40.737852416s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:18:14.011763 1450194 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:18:14.036372 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:14.039205 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:14.174534 1450194 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:18:14.174579 1450194 node_conditions.go:123] node cpu capacity is 2
	I1225 12:18:14.174596 1450194 node_conditions.go:105] duration metric: took 162.827323ms to run NodePressure ...
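The NodePressure verification reads the node object's capacity fields; the two figures printed above correspond to status.capacity["ephemeral-storage"] (17784752Ki) and status.capacity["cpu"] (2). A minimal client-go read of those same fields, assuming it runs where the kubeconfig path from the log is valid:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList (map of resource name to quantity).
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage capacity %s, cpu capacity %s\n",
			n.Name, storage.String(), cpu.String())
	}
}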
	I1225 12:18:14.174612 1450194 start.go:228] waiting for startup goroutines ...
	I1225 12:18:14.271595 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:14.397765 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:14.536566 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:14.537061 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:14.771119 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:14.896363 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:15.035608 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:15.037680 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:15.271402 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:15.397744 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:15.538750 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:15.539447 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:15.770654 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:15.900239 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:16.036314 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:16.039943 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:16.272912 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:16.402575 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:16.537562 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:16.538261 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:16.773732 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:16.896013 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:17.035801 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:17.038770 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:17.275703 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:17.403279 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:17.535662 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:17.536799 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:17.774837 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:17.895421 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:18.036657 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:18.038659 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:18.270963 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:18.398714 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:18.537515 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:18.538374 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:18.775602 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:18.896210 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:19.036410 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:19.040565 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:19.270845 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:19.403981 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:19.537348 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:19.540727 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:19.774488 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:19.896836 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:20.035025 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:20.037857 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:20.275494 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:20.396163 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:20.536469 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:20.538430 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:20.793690 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:20.901495 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:21.047753 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:21.051140 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:21.271460 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:21.396472 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:21.535647 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:21.538904 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:21.770383 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:21.895458 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:22.036848 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:22.038276 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:22.271201 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:22.395436 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:22.535028 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:22.539278 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:22.771230 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:22.897954 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:23.034948 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:23.038712 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:23.270245 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:23.399218 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:23.535756 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:23.538169 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:23.771747 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:23.896116 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:24.041257 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:24.041938 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:24.270738 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:24.395089 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:24.535373 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:24.538866 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:24.770390 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:24.895763 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:25.036059 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:25.040417 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:25.271482 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:25.396397 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:25.540292 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:25.540839 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:25.780485 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:25.895779 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:26.036890 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:26.039796 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:26.270669 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:26.395829 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:26.542990 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:26.547408 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:26.771424 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:26.896431 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:27.038334 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:27.043091 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:27.276760 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:27.399103 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:27.537824 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:27.539650 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:27.770050 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:27.897370 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:28.045017 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:28.046045 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:28.275035 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:28.396676 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:28.537589 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:28.539149 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:28.772199 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:28.896863 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:29.036399 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:29.038539 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:29.271103 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:29.396121 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:29.534906 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:29.538787 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:29.771245 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:29.902856 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:30.036867 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:30.041418 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:30.273830 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:30.397294 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:30.535690 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:30.539245 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:30.772742 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:30.895830 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:31.037694 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:31.039075 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:31.270203 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:31.396826 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:31.536312 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:31.537877 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:31.771382 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:31.896357 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:32.035652 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:32.043599 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:32.271202 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:32.396315 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:32.537659 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:32.538653 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:32.772658 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:32.895920 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:33.036715 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:33.038667 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:33.271210 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:33.401303 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:33.639589 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:33.642700 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:33.771468 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:33.898763 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:34.037818 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:34.038909 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:34.270820 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:34.396274 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:34.535432 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:34.540702 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:34.770470 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:34.896202 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:35.036644 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:35.038449 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:35.273670 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:35.396105 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:35.535268 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:35.537866 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:35.769874 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:35.896502 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:36.036252 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 12:18:36.040895 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:36.281218 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:36.398948 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:36.536269 1450194 kapi.go:107] duration metric: took 53.50721669s to wait for kubernetes.io/minikube-addons=registry ...
	I1225 12:18:36.538045 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:36.771046 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:36.897852 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:37.038275 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:37.271261 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:37.396553 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:37.537868 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:37.771818 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:37.895954 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:38.037518 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:38.271390 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:38.396703 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:38.537307 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:38.778810 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:38.898600 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:39.036813 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:39.270180 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:39.396523 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:39.538295 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:39.770526 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:39.896340 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:40.038026 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:40.271119 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:40.402216 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:40.540524 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:40.771956 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:40.896148 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:41.038070 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:41.270647 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:41.396839 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:41.537043 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:41.773219 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:41.901165 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:42.040029 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:42.290974 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:42.397401 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:42.544219 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:42.770290 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:42.896971 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:43.038803 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:43.270639 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:43.395971 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:43.539431 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:43.787809 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:44.039287 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:44.042738 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:44.270239 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:44.395829 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:44.537745 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:44.777272 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:44.897559 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:45.048002 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:45.271881 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:45.395380 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:45.536569 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:45.771449 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:45.897292 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:46.042670 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:46.278343 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:46.402850 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:46.541056 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:46.770324 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:46.897917 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:47.038102 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:47.270708 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:47.397929 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:47.537541 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:47.771918 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:47.902011 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:48.043369 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:48.272274 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:48.395225 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:48.557605 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:48.771642 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:48.897691 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:49.039457 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:49.276787 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:49.396635 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:49.569479 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:49.777639 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:49.896690 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:50.037912 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:50.272993 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:50.396821 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:50.541211 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:50.777777 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:50.898946 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:51.042120 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:51.271879 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:51.396045 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:51.542752 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:51.777980 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:51.897055 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:52.040348 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:52.273772 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:52.419285 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:52.540774 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:52.773591 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:52.895956 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:53.043971 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:53.272122 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:53.405980 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:53.583561 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:53.771083 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:53.896395 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:54.044024 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:54.270596 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:54.395533 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:54.537506 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:54.770959 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:54.895812 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:55.037416 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:55.270648 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:55.396087 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:55.538067 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:55.770855 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:55.895999 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:56.037474 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:56.271761 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:56.395918 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:56.537671 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:56.771816 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:56.896045 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:57.041934 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:57.271588 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:57.408302 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:57.925171 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:57.929844 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:57.938052 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:58.037798 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:58.271557 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:58.395659 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:58.537325 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:58.772891 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:58.896261 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:59.041839 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:59.270412 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:59.398057 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:18:59.537173 1450194 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 12:18:59.770735 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:18:59.895414 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:00.040531 1450194 kapi.go:107] duration metric: took 1m17.008421264s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1225 12:19:00.270801 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:00.397081 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:00.785587 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:00.908105 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:01.271708 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:01.397081 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:01.771672 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:01.899918 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:02.274832 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:02.400733 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:02.771461 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:02.896257 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 12:19:03.271581 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:03.396632 1450194 kapi.go:107] duration metric: took 1m17.005066999s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1225 12:19:03.398804 1450194 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-294911 cluster.
	I1225 12:19:03.400485 1450194 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1225 12:19:03.401961 1450194 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1225 12:19:04.064984 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:04.271953 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:04.772191 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:05.271642 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:05.773334 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:06.271468 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:06.771106 1450194 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 12:19:07.270180 1450194 kapi.go:107] duration metric: took 1m23.506001654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1225 12:19:07.272328 1450194 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1225 12:19:07.273753 1450194 addons.go:508] enable addons completed in 1m34.577541813s: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher helm-tiller inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1225 12:19:07.273812 1450194 start.go:233] waiting for cluster config update ...
	I1225 12:19:07.273839 1450194 start.go:242] writing updated cluster config ...
	I1225 12:19:07.274131 1450194 ssh_runner.go:195] Run: rm -f paused
	I1225 12:19:07.329558 1450194 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 12:19:07.331381 1450194 out.go:177] * Done! kubectl is now configured to use "addons-294911" cluster and "default" namespace by default
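	The kapi.go:96 lines above come from minikube repeatedly polling each addon's pods by label selector until they leave Pending, and the kapi.go:107 lines report the total wait as a duration metric. Below is a minimal, self-contained sketch of that kind of wait loop using client-go; the function names (waitForPods, allRunning), the 500ms poll interval, and the six-minute timeout are illustrative assumptions, not minikube's actual kapi.go implementation.

	// Illustrative sketch only (assumptions noted above); not minikube's kapi.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls every 500ms until all pods matching selector in ns are
	// Running, or the timeout elapses. Each poll logs the current state, and on
	// success the total elapsed time is printed, mirroring the log lines above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
	}

	// allRunning reports whether every pod in the list has reached the Running phase.
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		// Load the local kubeconfig (~/.kube/config) and build a clientset.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Wait for the ingress-nginx controller pods, as in the log above.
		if err := waitForPods(context.Background(), cs, "ingress-nginx",
			"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
	}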
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 12:16:45 UTC, ends at Mon 2023-12-25 12:21:56 UTC. --
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.019925691Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.019970952Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020019936Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020065447Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020111526Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020159133Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020199786Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020245069Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020284758Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b2e369e632beab1e6e06849bb17e6d64fbddee89ae73b8af4de476fede474575\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020328133Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\"" file="storage/storage_transport.go:185"
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.020795434Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e3db313c6dbc065d4ac3
b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34
c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.
io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,RepoTags:[],RepoDigests:[registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21 registry.k8s.io/metrics-server/metrics-server@sha256:ee4304963fb035239bb5c5e8c10f2f38ee80efc16ecbdb9feb7213c17ae2e86e],Size_:70330870,Uid:&Int64Value{Val
ue:65534,},Username:,Spec:nil,},&Image{Id:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,RepoTags:[],RepoDigests:[docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310 docker.io/marcnuri/yakd@sha256:e65e169e9a45f0fa8c0bb25f979481f4ed561aab48df856cba042a75dd34b0a9],Size_:204075024,Uid:&Int64Value{Value:10001,},Username:,Spec:nil,},&Image{Id:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7],Size_:57899101,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 registry.k8s.io/sig-storage/csi-
attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b],Size_:57303140,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:8cfc3f994a82b92969bf5521603a7f2815cc9a84857b3a888402e19a37423c4b,RepoTags:[],RepoDigests:[nvcr.io/nvidia/k8s-device-plugin@sha256:0153ba5eac2182064434f0101acce97ef512df59a32e1fbbdef12ca75c514e1e nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1],Size_:303559878,Uid:nil,Username:,Spec:nil,},&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-
external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c],Size_:56980232,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:909c3ff012b7f9fc4b802b73f250ad45e4ffa385299b71fdd6813f70a6711792,RepoTags:[],RepoDigests:[docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86 docker.io/library/registry@sha256:860f379a011eddfab604d9acfe3cf50b2d6e958026fb0f977132b0b083b1a3d7],Size_:25961051,Uid:nil,Username:,Spec:nil,},&Image{Id:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f],Size_:188129131,Uid:nil,Username:,Spec:nil,},&Image{Id:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,RepoTags:[],RepoDigests:[registry.k8s.
io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80],Size_:55070573,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5 gcr.io/k8s-minikube/kube-registry-proxy@sha256:f107ecd58728a2df5f2bb7e087f65f5363d0019b1e1fd476e4ef16065f44abfb],Size_:146566649,Uid:nil,Username:,Spec:nil,},&Image{Id:754854eab8c1c41bf733ba68c8bbae4cdc5806bd557d0c8c35f692d928489d75,RepoTags:[],RepoDigests:[gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49 gcr.io/cloud-spanner-emulator/emulator@sha256:7e0a9c24dddd7ef923530c1f490ed6382a4e3c9f49e7be7a3cec849bf1bfc30f],Size_:125497816,Uid:&Int64Value
{Value:0,},Username:,Spec:nil,},&Image{Id:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280],Size_:54632579,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,RepoTags:[],RepoDigests:[ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f],Size_:88649672,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,},&Image{Id:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,RepoTags:[],RepoDigests:[docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef docker.io/rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246],Siz
e_:35264960,Uid:nil,Username:,Spec:nil,},&Image{Id:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c],Size_:21521620,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5],Size_:37200280,Uid:nil,Username:,Spec:nil,},&Image{Id:5aa0bf4798fa2300b97564cc77480e6d0abac88f8bdc001c01eb4ab3b98b2fbf,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c04
21e712ad3577e2d78e5 registry.k8s.io/ingress-nginx/controller@sha256:5b161f051d017e55d358435f295f5e9a297e66158f136321d9b04520ec6c48a3],Size_:275577236,Uid:nil,Username:www-data,Spec:nil,},&Image{Id:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0],Size_:19577497,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6d2a98b274382ca188ce121413dcafda936b250500089a622c3f2ce821ab9a69,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf],Size_:49800034,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014
c2c76f9326992,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8],Size_:60675705,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5],Size_:57410185,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
],Size_:4497096,Uid:nil,Username:,Spec:nil,},&Image{Id:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8 docker.io/library/busybox@sha256:cca7bbfb3cd4dc1022f00cee78c51aa46ecc3141188f0dd520978a620697e7ad],Size_:4504102,Uid:nil,Username:,Spec:nil,},&Image{Id:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,RepoTags:[gcr.io/k8s-minikube/busybox:latest],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b],Size_:1462480,Uid:nil,Username:,Spec:nil,},&Image{Id:529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686 doc
ker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59],Size_:44405005,Uid:nil,Username:,Spec:nil,},&Image{Id:d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9],Size_:190867606,Uid:nil,Username:,Spec:nil,},&Image{Id:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,RepoTags:[docker.io/alpine/helm:2.16.3],RepoDigests:[docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f],Size_:47148757,Uid:nil,Username:,Spec:nil,},&Image{Id:b2e369e632beab1e6e06849bb17e6d64fbddee89ae73b8af4de476fede474575,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1 ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd2
9304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9],Size_:223439042,Uid:nil,Username:headlamp,Spec:nil,},&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=dad214f5-7be0-4cfb-9e70-deb698831940 name=/runtime.v1.ImageService/ListImages
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.078811595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ca13c393-f4e5-4c1e-988a-b04819cff02c name=/runtime.v1.RuntimeService/Version
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.078950941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ca13c393-f4e5-4c1e-988a-b04819cff02c name=/runtime.v1.RuntimeService/Version
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.080256300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5eb65a2c-baed-4fec-88e0-95970cb66b84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.081617109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703506916081600614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574232,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=5eb65a2c-baed-4fec-88e0-95970cb66b84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.082495618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f6a1bba-027d-4f3a-aeca-615541c17b02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.082554160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f6a1bba-027d-4f3a-aeca-615541c17b02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.083432477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6eb8b28699ac13fe13ffa59867ce203b9a61bb89d9036046504e4602fd5bfec,PodSandboxId:e88561e0b5d16c67a9d3af186f502197c2665773922a599a29fbdddff1103687,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703506908009430940,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn85m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a443acea-84f5-405d-b502-3a42b7a13baa,},Annotations:map[string]string{io.kubernetes.container.hash: 811807ca,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349b448ef16ac230139f99e87e91a485d2e9cfaf6a74fb3a12803f2b0510c204,PodSandboxId:9fc311cd01f1f89b1932467c4a4f06e5165b64033de3b423ed399f80e36091a2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1703506805337041420,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-8x7wm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 19fe76e9-2ad4-4e1f-9955-0c9f045d375a,},An
notations:map[string]string{io.kubernetes.container.hash: fec60fa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c13bba5d9980486ca5398f2a49aab91c8ea454c0d4af728d7bf6097f47b66c1,PodSandboxId:6b32e496d124e7886b6a91e111002fd1cbdaaf9438a655726cc916e12ed9a4e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703506768452399782,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 6ba140c4-9acd-4f8f-a1b8-20213766cbf9,},Annotations:map[string]string{io.kubernetes.container.hash: 4d0090c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f538716f65b274641562ee1f20f5799ee7bc35d4b83f0b686c430102d99cf44e,PodSandboxId:d892d0d361d2cbae87cc2d8ae7cea42f7ff3419fcc63e7e6f7fa9971abba2d64,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1703506749313983932,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-m9chn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ba08ff7-607b-4ffa-8006-f82f1c3279d2,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe6f123,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0ce0a24d7e28ce466a7b8261c289b6ec517c5c358b187c56ede5142e2abbc67,PodSandboxId:271d33a5062d35485e6932b071b2781307906f011a5b7c0921dfcde225841dba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1703506742215488407,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zxnrc,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 80c1e7a1-ba1b-42fa-a4e2-b2a77286fae8,},Annotations:map[string]string{io.kubernetes.container.hash: b0032121,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc2be081949e6c371066bd7174610ebc47eb2e22ed76b0fdb1230ad4dd33280,PodSandboxId:ef5f8f7e99bf9dacb62c3838a5f4c74f07b91c569ea141c1ad6ce404bdc29fef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1703506726098310650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2whnq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1864699b-6bf2-4ad8-9a46-98d559373002,},Annotations:map[string]string{io.kubernetes.container.hash: 163e449c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71e5511bfc97af2d48f6d941c85047d163ea6aa194ea0e70c6ea914d7f4bbc2,PodSandboxId:39aabb609fdc857c8c220bb0c3c580d564b5ef65d82f93cf6b5dd8ffebc3d2af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703506707827994954,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecc0bf3-4efa-47d2-a9d7-cd32744f43a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85d7bb68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626bf7d4c44646a2b4ca3af801fae6b79ffb11a3b4af7ac4c3617545040c9973,PodSandboxId:39aabb609fdc857c8c220bb0c3c580d564b5ef65d82f93cf6b5dd8ffebc3d2af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703506675328433610,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecc0bf3-4efa-47d2-a9d7-cd32744f43a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85d7bb68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073ec7e3c27285a469cc9a5fa26d29bf6d2b1d79ec1e12e6ea311c4ccb103baf,PodSandboxId:debd19842631029ce01c6c06b81156f5c3aa5f7851f85e30d69cd7a47b685b0c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1703506674205409235,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qqs7d,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a4f764ed-ad21-4e92-ba64-e1571de7e54e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f5284d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c337c21ad9d04c3585a804b25bc1577c8efc7e43f5f8f89939094c3261b427,PodSandboxId:1141a2366a64ecebc5d6b1d1814526d76b273b377941847010c0692de8c2ffa0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703506671115806184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4a266a-840c-4ea2-86a0-a15bf426f8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8c7cc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a119247a34a5eb2c659f78e6673da490042287cdd7995d7f01ec1b0eea73526,PodSandboxId:0764807fbd25e8e9ca0aaff4565409b80bb9437254f217c0ef31dd8e949c9449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703506657749275906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zq2p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a13d8b3-0b95-4925-8ac4-b9a6cde3cad2,},Annotations:map[string]string{io.kubernetes.container.hash: a96a1039,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9faa0f2f538c1cd8fa6fa59e325206bbd70eb9229e1dd46cbc07c0fc8fb2cff,PodSandboxId:7db164cde84e9a8ef50f9eae66bc5366657b0a03a33deddc30b76fbca81e5268,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703506632599668638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a5ccefcad156e2d4fe90b50c547d2c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2708c1e729b6e4f8f00982329ab4db8aae1af8ad5599d3811bdfb005188c90,PodSandboxId:f263ba898aab23a2242ef7fec0160179927fbb9130a0b4eec26dd50d51ee3f9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703506632336418867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649ac67cb2602180aec9bb86895bcfc3,},Annotations:map[string]string{io.kubernetes.container.hash: 957de2da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeda9b64a7762f2136f3dde47f452baf2631db0dd670cf89827e541382a694a,PodSandboxId:f0035e977a2a8978b892d5cf903b9c06bc88464322ccfa5eb6aafcf59b591e33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e
616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703506632315121884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a8bbab64ca8093dfc5da0fd556be75,},Annotations:map[string]string{io.kubernetes.container.hash: de63e7da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5776b3ce0ecd53cbdd88c0141719eb635b678942ff43ab28d516f8cdf7e2f9,PodSandboxId:e761387f34de873764770c351a44dd56412a2775fba174ff5d7d6b7464a5cfe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188
be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703506632166759591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d283a23021ea44658821e429136ca8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f6a1bba-027d-4f3a-aeca-615541c17b02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.134428762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=da8f53be-8166-4dea-8a30-e6139542e116 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.134524894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=da8f53be-8166-4dea-8a30-e6139542e116 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.135937969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=37a1ab24-77c7-4621-ae9a-272994513218 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.137498392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703506916137475404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574232,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=37a1ab24-77c7-4621-ae9a-272994513218 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.138197609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d955c60-d9e6-4049-9057-d0434d59d7fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.138279719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3d955c60-d9e6-4049-9057-d0434d59d7fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:21:56 addons-294911 crio[713]: time="2023-12-25 12:21:56.138622317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6eb8b28699ac13fe13ffa59867ce203b9a61bb89d9036046504e4602fd5bfec,PodSandboxId:e88561e0b5d16c67a9d3af186f502197c2665773922a599a29fbdddff1103687,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703506908009430940,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn85m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a443acea-84f5-405d-b502-3a42b7a13baa,},Annotations:map[string]string{io.kubernetes.container.hash: 811807ca,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349b448ef16ac230139f99e87e91a485d2e9cfaf6a74fb3a12803f2b0510c204,PodSandboxId:9fc311cd01f1f89b1932467c4a4f06e5165b64033de3b423ed399f80e36091a2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1703506805337041420,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-8x7wm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 19fe76e9-2ad4-4e1f-9955-0c9f045d375a,},An
notations:map[string]string{io.kubernetes.container.hash: fec60fa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c13bba5d9980486ca5398f2a49aab91c8ea454c0d4af728d7bf6097f47b66c1,PodSandboxId:6b32e496d124e7886b6a91e111002fd1cbdaaf9438a655726cc916e12ed9a4e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703506768452399782,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 6ba140c4-9acd-4f8f-a1b8-20213766cbf9,},Annotations:map[string]string{io.kubernetes.container.hash: 4d0090c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f538716f65b274641562ee1f20f5799ee7bc35d4b83f0b686c430102d99cf44e,PodSandboxId:d892d0d361d2cbae87cc2d8ae7cea42f7ff3419fcc63e7e6f7fa9971abba2d64,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1703506749313983932,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-m9chn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ba08ff7-607b-4ffa-8006-f82f1c3279d2,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe6f123,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0ce0a24d7e28ce466a7b8261c289b6ec517c5c358b187c56ede5142e2abbc67,PodSandboxId:271d33a5062d35485e6932b071b2781307906f011a5b7c0921dfcde225841dba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1703506742215488407,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zxnrc,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 80c1e7a1-ba1b-42fa-a4e2-b2a77286fae8,},Annotations:map[string]string{io.kubernetes.container.hash: b0032121,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc2be081949e6c371066bd7174610ebc47eb2e22ed76b0fdb1230ad4dd33280,PodSandboxId:ef5f8f7e99bf9dacb62c3838a5f4c74f07b91c569ea141c1ad6ce404bdc29fef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1703506726098310650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2whnq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1864699b-6bf2-4ad8-9a46-98d559373002,},Annotations:map[string]string{io.kubernetes.container.hash: 163e449c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71e5511bfc97af2d48f6d941c85047d163ea6aa194ea0e70c6ea914d7f4bbc2,PodSandboxId:39aabb609fdc857c8c220bb0c3c580d564b5ef65d82f93cf6b5dd8ffebc3d2af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703506707827994954,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecc0bf3-4efa-47d2-a9d7-cd32744f43a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85d7bb68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626bf7d4c44646a2b4ca3af801fae6b79ffb11a3b4af7ac4c3617545040c9973,PodSandboxId:39aabb609fdc857c8c220bb0c3c580d564b5ef65d82f93cf6b5dd8ffebc3d2af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703506675328433610,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecc0bf3-4efa-47d2-a9d7-cd32744f43a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85d7bb68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073ec7e3c27285a469cc9a5fa26d29bf6d2b1d79ec1e12e6ea311c4ccb103baf,PodSandboxId:debd19842631029ce01c6c06b81156f5c3aa5f7851f85e30d69cd7a47b685b0c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1703506674205409235,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qqs7d,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a4f764ed-ad21-4e92-ba64-e1571de7e54e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f5284d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c337c21ad9d04c3585a804b25bc1577c8efc7e43f5f8f89939094c3261b427,PodSandboxId:1141a2366a64ecebc5d6b1d1814526d76b273b377941847010c0692de8c2ffa0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703506671115806184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d9h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4a266a-840c-4ea2-86a0-a15bf426f8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8c7cc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a119247a34a5eb2c659f78e6673da490042287cdd7995d7f01ec1b0eea73526,PodSandboxId:0764807fbd25e8e9ca0aaff4565409b80bb9437254f217c0ef31dd8e949c9449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703506657749275906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zq2p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a13d8b3-0b95-4925-8ac4-b9a6cde3cad2,},Annotations:map[string]string{io.kubernetes.container.hash: a96a1039,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9faa0f2f538c1cd8fa6fa59e325206bbd70eb9229e1dd46cbc07c0fc8fb2cff,PodSandboxId:7db164cde84e9a8ef50f9eae66bc5366657b0a03a33deddc30b76fbca81e5268,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703506632599668638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a5ccefcad156e2d4fe90b50c547d2c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2708c1e729b6e4f8f00982329ab4db8aae1af8ad5599d3811bdfb005188c90,PodSandboxId:f263ba898aab23a2242ef7fec0160179927fbb9130a0b4eec26dd50d51ee3f9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703506632336418867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649ac67cb2602180aec9bb86895bcfc3,},Annotations:map[string]string{io.kubernetes.container.hash: 957de2da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeda9b64a7762f2136f3dde47f452baf2631db0dd670cf89827e541382a694a,PodSandboxId:f0035e977a2a8978b892d5cf903b9c06bc88464322ccfa5eb6aafcf59b591e33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e
616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703506632315121884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a8bbab64ca8093dfc5da0fd556be75,},Annotations:map[string]string{io.kubernetes.container.hash: de63e7da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5776b3ce0ecd53cbdd88c0141719eb635b678942ff43ab28d516f8cdf7e2f9,PodSandboxId:e761387f34de873764770c351a44dd56412a2775fba174ff5d7d6b7464a5cfe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188
be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703506632166759591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-294911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d283a23021ea44658821e429136ca8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3d955c60-d9e6-4049-9057-d0434d59d7fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6eb8b28699ac       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago        Running             hello-world-app           0                   e88561e0b5d16       hello-world-app-5d77478584-wn85m
	349b448ef16ac       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        About a minute ago   Running             headlamp                  0                   9fc311cd01f1f       headlamp-777fd4b855-8x7wm
	7c13bba5d9980       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago        Running             nginx                     0                   6b32e496d124e       nginx
	f538716f65b27       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             2 minutes ago        Exited              patch                     3                   d892d0d361d2c       ingress-nginx-admission-patch-m9chn
	b0ce0a24d7e28       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago        Running             gcp-auth                  0                   271d33a5062d3       gcp-auth-d4c87556c-zxnrc
	dcc2be081949e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago        Exited              create                    0                   ef5f8f7e99bf9       ingress-nginx-admission-create-2whnq
	c71e5511bfc97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago        Running             storage-provisioner       1                   39aabb609fdc8       storage-provisioner
	626bf7d4c4464       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago        Exited              storage-provisioner       0                   39aabb609fdc8       storage-provisioner
	073ec7e3c2728       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   debd198426310       yakd-dashboard-9947fc6bf-qqs7d
	93c337c21ad9d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago        Running             kube-proxy                0                   1141a2366a64e       kube-proxy-4d9h2
	1a119247a34a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago        Running             coredns                   0                   0764807fbd25e       coredns-5dd5756b68-zq2p5
	b9faa0f2f538c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago        Running             kube-scheduler            0                   7db164cde84e9       kube-scheduler-addons-294911
	8b2708c1e729b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago        Running             etcd                      0                   f263ba898aab2       etcd-addons-294911
	bbeda9b64a776       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago        Running             kube-apiserver            0                   f0035e977a2a8       kube-apiserver-addons-294911
	3d5776b3ce0ec       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago        Running             kube-controller-manager   0                   e761387f34de8       kube-controller-manager-addons-294911
	
	
	==> coredns [1a119247a34a5eb2c659f78e6673da490042287cdd7995d7f01ec1b0eea73526] <==
	[INFO] 10.244.0.9:50759 - 24945 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084222s
	[INFO] 10.244.0.9:49642 - 65163 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108965s
	[INFO] 10.244.0.9:49642 - 10381 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005996s
	[INFO] 10.244.0.9:39021 - 20823 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061052s
	[INFO] 10.244.0.9:39021 - 12117 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000194775s
	[INFO] 10.244.0.9:36203 - 14739 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118934s
	[INFO] 10.244.0.9:36203 - 37009 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076691s
	[INFO] 10.244.0.9:57031 - 46108 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191596s
	[INFO] 10.244.0.9:57031 - 31775 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031439s
	[INFO] 10.244.0.9:55744 - 57291 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128888s
	[INFO] 10.244.0.9:55744 - 46287 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048513s
	[INFO] 10.244.0.9:51136 - 11015 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041397s
	[INFO] 10.244.0.9:51136 - 13058 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041955s
	[INFO] 10.244.0.9:37160 - 16595 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049362s
	[INFO] 10.244.0.9:37160 - 48337 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107377s
	[INFO] 10.244.0.22:53666 - 41261 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254867s
	[INFO] 10.244.0.22:58295 - 56795 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212821s
	[INFO] 10.244.0.22:40158 - 44105 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000208807s
	[INFO] 10.244.0.22:59519 - 49568 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119873s
	[INFO] 10.244.0.22:49500 - 48937 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000068269s
	[INFO] 10.244.0.22:35232 - 33433 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100373s
	[INFO] 10.244.0.22:49463 - 53919 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000682469s
	[INFO] 10.244.0.22:35237 - 60593 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000493852s
	[INFO] 10.244.0.26:52509 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000859017s
	[INFO] 10.244.0.26:58038 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000224074s
	
	
	==> describe nodes <==
	Name:               addons-294911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-294911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=addons-294911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T12_17_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-294911
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:17:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-294911
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:21:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:20:25 +0000   Mon, 25 Dec 2023 12:17:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:20:25 +0000   Mon, 25 Dec 2023 12:17:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:20:25 +0000   Mon, 25 Dec 2023 12:17:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:20:25 +0000   Mon, 25 Dec 2023 12:17:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    addons-294911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a28a9d64ac7c45aeabed684d995744d9
	  System UUID:                a28a9d64-ac7c-45ae-abed-684d995744d9
	  Boot ID:                    91da275c-e9e0-4854-b976-bd84d1a5a7ac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-wn85m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-zxnrc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  headlamp                    headlamp-777fd4b855-8x7wm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 coredns-5dd5756b68-zq2p5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m24s
	  kube-system                 etcd-addons-294911                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-addons-294911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-294911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-4d9h2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-addons-294911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qqs7d           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m36s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s  kubelet          Node addons-294911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s  kubelet          Node addons-294911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s  kubelet          Node addons-294911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m36s  kubelet          Node addons-294911 status is now: NodeReady
	  Normal  RegisteredNode           4m25s  node-controller  Node addons-294911 event: Registered Node addons-294911 in Controller
	
	
	==> dmesg <==
	[  +0.150903] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.083345] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.870654] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.122263] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.144558] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.108548] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.222917] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Dec25 12:17] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[  +9.256508] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +24.184176] kauditd_printk_skb: 53 callbacks suppressed
	[ +11.639563] kauditd_printk_skb: 20 callbacks suppressed
	[Dec25 12:18] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.871264] kauditd_printk_skb: 18 callbacks suppressed
	[ +40.394745] kauditd_printk_skb: 39 callbacks suppressed
	[Dec25 12:19] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.086228] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.173780] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.008160] kauditd_printk_skb: 10 callbacks suppressed
	[ +27.625500] kauditd_printk_skb: 4 callbacks suppressed
	[Dec25 12:20] kauditd_printk_skb: 11 callbacks suppressed
	[ +39.319642] kauditd_printk_skb: 12 callbacks suppressed
	[Dec25 12:21] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8b2708c1e729b6e4f8f00982329ab4db8aae1af8ad5599d3811bdfb005188c90] <==
	{"level":"info","ts":"2023-12-25T12:18:57.92Z","caller":"traceutil/trace.go:171","msg":"trace[1545318371] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"350.792526ms","start":"2023-12-25T12:18:57.569197Z","end":"2023-12-25T12:18:57.919989Z","steps":["trace[1545318371] 'process raft request'  (duration: 350.340463ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:18:57.922542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T12:18:57.569181Z","time spent":"353.316353ms","remote":"127.0.0.1:60538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2186,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" mod_revision:1016 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" value_size:2114 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" > >"}
	{"level":"info","ts":"2023-12-25T12:18:57.922677Z","caller":"traceutil/trace.go:171","msg":"trace[983876038] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1110; }","duration":"159.342883ms","start":"2023-12-25T12:18:57.763326Z","end":"2023-12-25T12:18:57.922669Z","steps":["trace[983876038] 'agreement among raft nodes before linearized reading'  (duration: 156.276858ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T12:19:02.09403Z","caller":"traceutil/trace.go:171","msg":"trace[1419387672] transaction","detail":"{read_only:false; response_revision:1144; number_of_response:1; }","duration":"133.275988ms","start":"2023-12-25T12:19:01.960732Z","end":"2023-12-25T12:19:02.094008Z","steps":["trace[1419387672] 'process raft request'  (duration: 127.209423ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:19:04.057341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.66271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82491"}
	{"level":"info","ts":"2023-12-25T12:19:04.057717Z","caller":"traceutil/trace.go:171","msg":"trace[1689161114] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1164; }","duration":"294.047908ms","start":"2023-12-25T12:19:03.763656Z","end":"2023-12-25T12:19:04.057704Z","steps":["trace[1689161114] 'range keys from in-memory index tree'  (duration: 293.465644ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T12:19:28.339225Z","caller":"traceutil/trace.go:171","msg":"trace[1849343930] linearizableReadLoop","detail":"{readStateIndex:1435; appliedIndex:1434; }","duration":"231.8157ms","start":"2023-12-25T12:19:28.107371Z","end":"2023-12-25T12:19:28.339186Z","steps":["trace[1849343930] 'read index received'  (duration: 231.684183ms)","trace[1849343930] 'applied index is now lower than readState.Index'  (duration: 131.065µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T12:19:28.339581Z","caller":"traceutil/trace.go:171","msg":"trace[2007627337] transaction","detail":"{read_only:false; response_revision:1388; number_of_response:1; }","duration":"295.7874ms","start":"2023-12-25T12:19:28.04377Z","end":"2023-12-25T12:19:28.339557Z","steps":["trace[2007627337] 'process raft request'  (duration: 295.329208ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:19:28.339949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.479171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:9329"}
	{"level":"info","ts":"2023-12-25T12:19:28.339977Z","caller":"traceutil/trace.go:171","msg":"trace[638664836] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1388; }","duration":"232.64987ms","start":"2023-12-25T12:19:28.10732Z","end":"2023-12-25T12:19:28.33997Z","steps":["trace[638664836] 'agreement among raft nodes before linearized reading'  (duration: 232.417362ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T12:19:59.287359Z","caller":"traceutil/trace.go:171","msg":"trace[1484603605] linearizableReadLoop","detail":"{readStateIndex:1596; appliedIndex:1595; }","duration":"192.270705ms","start":"2023-12-25T12:19:59.095073Z","end":"2023-12-25T12:19:59.287344Z","steps":["trace[1484603605] 'read index received'  (duration: 192.140212ms)","trace[1484603605] 'applied index is now lower than readState.Index'  (duration: 130.143µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T12:19:59.287711Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.587584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-25T12:19:59.288117Z","caller":"traceutil/trace.go:171","msg":"trace[1663890221] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1540; }","duration":"193.055714ms","start":"2023-12-25T12:19:59.095049Z","end":"2023-12-25T12:19:59.288105Z","steps":["trace[1663890221] 'agreement among raft nodes before linearized reading'  (duration: 192.513592ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T12:19:59.288042Z","caller":"traceutil/trace.go:171","msg":"trace[1747103306] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"213.772743ms","start":"2023-12-25T12:19:59.074256Z","end":"2023-12-25T12:19:59.288029Z","steps":["trace[1747103306] 'process raft request'  (duration: 213.006094ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T12:20:01.936297Z","caller":"traceutil/trace.go:171","msg":"trace[1397842089] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"149.917709ms","start":"2023-12-25T12:20:01.786364Z","end":"2023-12-25T12:20:01.936282Z","steps":["trace[1397842089] 'process raft request'  (duration: 149.789862ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:20:05.032592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.396301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6100843267948422708 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" mod_revision:1607 > success:<request_delete_range:<key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" > > failure:<request_range:<key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" > >>","response":"size:18"}
	{"level":"info","ts":"2023-12-25T12:20:05.032737Z","caller":"traceutil/trace.go:171","msg":"trace[447021520] transaction","detail":"{read_only:false; response_revision:1609; number_of_response:1; }","duration":"369.593236ms","start":"2023-12-25T12:20:04.663133Z","end":"2023-12-25T12:20:05.032726Z","steps":["trace[447021520] 'process raft request'  (duration: 369.546327ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:20:05.032796Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T12:20:04.663113Z","time spent":"369.648424ms","remote":"127.0.0.1:60464","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":614,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" mod_revision:0 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" value_size:553 >> failure:<>"}
	{"level":"info","ts":"2023-12-25T12:20:05.033089Z","caller":"traceutil/trace.go:171","msg":"trace[452357381] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1608; }","duration":"478.784882ms","start":"2023-12-25T12:20:04.554294Z","end":"2023-12-25T12:20:05.033079Z","steps":["trace[452357381] 'process raft request'  (duration: 158.407902ms)","trace[452357381] 'compare'  (duration: 319.195025ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T12:20:05.033184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T12:20:04.55428Z","time spent":"478.871768ms","remote":"127.0.0.1:60462","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":72,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" mod_revision:1607 > success:<request_delete_range:<key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" > > failure:<request_range:<key:\"/registry/persistentvolumes/pvc-37d7a09c-0a9c-4b2b-bb46-7c13bcbe977d\" > >"}
	{"level":"info","ts":"2023-12-25T12:20:05.033286Z","caller":"traceutil/trace.go:171","msg":"trace[1625718955] linearizableReadLoop","detail":"{readStateIndex:1665; appliedIndex:1664; }","duration":"426.476323ms","start":"2023-12-25T12:20:04.606804Z","end":"2023-12-25T12:20:05.03328Z","steps":["trace[1625718955] 'read index received'  (duration: 105.907024ms)","trace[1625718955] 'applied index is now lower than readState.Index'  (duration: 320.568737ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T12:20:05.033372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.582414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-25T12:20:05.033421Z","caller":"traceutil/trace.go:171","msg":"trace[477828859] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1609; }","duration":"426.629885ms","start":"2023-12-25T12:20:04.60678Z","end":"2023-12-25T12:20:05.03341Z","steps":["trace[477828859] 'agreement among raft nodes before linearized reading'  (duration: 426.554535ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:20:05.033442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T12:20:04.606768Z","time spent":"426.668759ms","remote":"127.0.0.1:60470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3775,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"info","ts":"2023-12-25T12:20:35.492709Z","caller":"traceutil/trace.go:171","msg":"trace[315872011] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1700; }","duration":"118.05595ms","start":"2023-12-25T12:20:35.374639Z","end":"2023-12-25T12:20:35.492695Z","steps":["trace[315872011] 'process raft request'  (duration: 117.932761ms)"],"step_count":1}
	
	
	==> gcp-auth [b0ce0a24d7e28ce466a7b8261c289b6ec517c5c358b187c56ede5142e2abbc67] <==
	2023/12/25 12:19:02 GCP Auth Webhook started!
	2023/12/25 12:19:07 Ready to marshal response ...
	2023/12/25 12:19:07 Ready to write response ...
	2023/12/25 12:19:07 Ready to marshal response ...
	2023/12/25 12:19:07 Ready to write response ...
	2023/12/25 12:19:18 Ready to marshal response ...
	2023/12/25 12:19:18 Ready to write response ...
	2023/12/25 12:19:18 Ready to marshal response ...
	2023/12/25 12:19:18 Ready to write response ...
	2023/12/25 12:19:24 Ready to marshal response ...
	2023/12/25 12:19:24 Ready to write response ...
	2023/12/25 12:19:47 Ready to marshal response ...
	2023/12/25 12:19:47 Ready to write response ...
	2023/12/25 12:19:47 Ready to marshal response ...
	2023/12/25 12:19:47 Ready to write response ...
	2023/12/25 12:19:59 Ready to marshal response ...
	2023/12/25 12:19:59 Ready to write response ...
	2023/12/25 12:19:59 Ready to marshal response ...
	2023/12/25 12:19:59 Ready to write response ...
	2023/12/25 12:19:59 Ready to marshal response ...
	2023/12/25 12:19:59 Ready to write response ...
	2023/12/25 12:20:25 Ready to marshal response ...
	2023/12/25 12:20:25 Ready to write response ...
	2023/12/25 12:21:45 Ready to marshal response ...
	2023/12/25 12:21:45 Ready to write response ...
	
	
	==> kernel <==
	 12:21:56 up 5 min,  0 users,  load average: 1.28, 2.26, 1.16
	Linux addons-294911 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bbeda9b64a7762f2136f3dde47f452baf2631db0dd670cf89827e541382a694a] <==
	I1225 12:19:31.732658       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1225 12:19:32.766826       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1225 12:19:34.338367       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1225 12:19:59.511994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.48.33"}
	I1225 12:20:01.785591       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1225 12:20:42.450678       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.451088       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.484425       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.484723       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.553500       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.553673       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.581796       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.582032       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.592210       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.592323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.595284       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.595391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.614523       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.614591       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1225 12:20:42.625789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1225 12:20:42.625954       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1225 12:20:43.582994       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1225 12:20:43.626147       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1225 12:20:43.636606       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1225 12:21:45.370303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.6.120"}
	
	
	==> kube-controller-manager [3d5776b3ce0ecd53cbdd88c0141719eb635b678942ff43ab28d516f8cdf7e2f9] <==
	W1225 12:21:02.717929       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:02.718023       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1225 12:21:02.876625       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:02.876678       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1225 12:21:19.717732       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:19.718072       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1225 12:21:21.435509       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:21.435669       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1225 12:21:27.727613       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:27.727800       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1225 12:21:35.463519       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:35.463633       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1225 12:21:45.098805       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1225 12:21:45.143095       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-wn85m"
	I1225 12:21:45.168723       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.458328ms"
	I1225 12:21:45.185156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.359951ms"
	I1225 12:21:45.185289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.468µs"
	I1225 12:21:45.185397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.587µs"
	I1225 12:21:48.071478       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1225 12:21:48.073503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.935µs"
	I1225 12:21:48.086458       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1225 12:21:48.909507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.398759ms"
	I1225 12:21:48.910618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.104µs"
	W1225 12:21:50.208252       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1225 12:21:50.208404       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [93c337c21ad9d04c3585a804b25bc1577c8efc7e43f5f8f89939094c3261b427] <==
	I1225 12:17:54.997354       1 server_others.go:69] "Using iptables proxy"
	I1225 12:17:55.160623       1 node.go:141] Successfully retrieved node IP: 192.168.39.148
	I1225 12:17:55.539287       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 12:17:55.539334       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 12:17:55.616236       1 server_others.go:152] "Using iptables Proxier"
	I1225 12:17:55.616278       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 12:17:55.616479       1 server.go:846] "Version info" version="v1.28.4"
	I1225 12:17:55.616489       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 12:17:55.631520       1 config.go:188] "Starting service config controller"
	I1225 12:17:55.631564       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 12:17:55.631593       1 config.go:97] "Starting endpoint slice config controller"
	I1225 12:17:55.631596       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 12:17:55.639256       1 config.go:315] "Starting node config controller"
	I1225 12:17:55.639293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 12:17:55.732447       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 12:17:55.743343       1 shared_informer.go:318] Caches are synced for node config
	I1225 12:17:55.732784       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b9faa0f2f538c1cd8fa6fa59e325206bbd70eb9229e1dd46cbc07c0fc8fb2cff] <==
	W1225 12:17:16.584310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 12:17:16.584350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1225 12:17:17.440972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1225 12:17:17.441027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1225 12:17:17.486083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 12:17:17.486148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 12:17:17.515635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 12:17:17.515690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1225 12:17:17.578183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1225 12:17:17.578284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1225 12:17:17.664396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1225 12:17:17.664483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1225 12:17:17.664639       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 12:17:17.664789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 12:17:17.725218       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1225 12:17:17.725267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 12:17:17.790102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 12:17:17.790170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1225 12:17:17.806705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 12:17:17.806953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1225 12:17:17.863581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 12:17:17.863666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1225 12:17:17.924629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 12:17:17.924714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1225 12:17:20.573094       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 12:16:45 UTC, ends at Mon 2023-12-25 12:21:56 UTC. --
	Dec 25 12:21:45 addons-294911 kubelet[1253]: I1225 12:21:45.155501    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="6bcddd53-09ea-49c4-8cf8-dea756c13bcb" containerName="volume-snapshot-controller"
	Dec 25 12:21:45 addons-294911 kubelet[1253]: I1225 12:21:45.155506    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="cb471e2e-c800-4c0a-b52a-8b4f9e64737b" containerName="csi-provisioner"
	Dec 25 12:21:45 addons-294911 kubelet[1253]: I1225 12:21:45.245475    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzz2w\" (UniqueName: \"kubernetes.io/projected/a443acea-84f5-405d-b502-3a42b7a13baa-kube-api-access-fzz2w\") pod \"hello-world-app-5d77478584-wn85m\" (UID: \"a443acea-84f5-405d-b502-3a42b7a13baa\") " pod="default/hello-world-app-5d77478584-wn85m"
	Dec 25 12:21:45 addons-294911 kubelet[1253]: I1225 12:21:45.245519    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a443acea-84f5-405d-b502-3a42b7a13baa-gcp-creds\") pod \"hello-world-app-5d77478584-wn85m\" (UID: \"a443acea-84f5-405d-b502-3a42b7a13baa\") " pod="default/hello-world-app-5d77478584-wn85m"
	Dec 25 12:21:46 addons-294911 kubelet[1253]: I1225 12:21:46.556491    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqtb\" (UniqueName: \"kubernetes.io/projected/7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3-kube-api-access-vpqtb\") pod \"7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3\" (UID: \"7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3\") "
	Dec 25 12:21:46 addons-294911 kubelet[1253]: I1225 12:21:46.559551    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3-kube-api-access-vpqtb" (OuterVolumeSpecName: "kube-api-access-vpqtb") pod "7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3" (UID: "7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3"). InnerVolumeSpecName "kube-api-access-vpqtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 25 12:21:46 addons-294911 kubelet[1253]: I1225 12:21:46.657187    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vpqtb\" (UniqueName: \"kubernetes.io/projected/7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3-kube-api-access-vpqtb\") on node \"addons-294911\" DevicePath \"\""
	Dec 25 12:21:46 addons-294911 kubelet[1253]: I1225 12:21:46.842409    1253 scope.go:117] "RemoveContainer" containerID="7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196"
	Dec 25 12:21:47 addons-294911 kubelet[1253]: I1225 12:21:47.062224    1253 scope.go:117] "RemoveContainer" containerID="7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196"
	Dec 25 12:21:47 addons-294911 kubelet[1253]: E1225 12:21:47.063210    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196\": container with ID starting with 7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196 not found: ID does not exist" containerID="7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196"
	Dec 25 12:21:47 addons-294911 kubelet[1253]: I1225 12:21:47.063249    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196"} err="failed to get container status \"7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196\": rpc error: code = NotFound desc = could not find container \"7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196\": container with ID starting with 7d5e3acd78d5b44f2620fa337e5918c83d5f825c6046d4b1671e26e01b925196 not found: ID does not exist"
	Dec 25 12:21:48 addons-294911 kubelet[1253]: I1225 12:21:48.289010    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0ba08ff7-607b-4ffa-8006-f82f1c3279d2" path="/var/lib/kubelet/pods/0ba08ff7-607b-4ffa-8006-f82f1c3279d2/volumes"
	Dec 25 12:21:48 addons-294911 kubelet[1253]: I1225 12:21:48.289520    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1864699b-6bf2-4ad8-9a46-98d559373002" path="/var/lib/kubelet/pods/1864699b-6bf2-4ad8-9a46-98d559373002/volumes"
	Dec 25 12:21:48 addons-294911 kubelet[1253]: I1225 12:21:48.290134    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3" path="/var/lib/kubelet/pods/7ba3b010-cc56-4e35-8bef-1bb4ef70e8f3/volumes"
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.392569    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlk8\" (UniqueName: \"kubernetes.io/projected/f1987d61-e072-4639-815a-50054e71db47-kube-api-access-xrlk8\") pod \"f1987d61-e072-4639-815a-50054e71db47\" (UID: \"f1987d61-e072-4639-815a-50054e71db47\") "
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.392686    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1987d61-e072-4639-815a-50054e71db47-webhook-cert\") pod \"f1987d61-e072-4639-815a-50054e71db47\" (UID: \"f1987d61-e072-4639-815a-50054e71db47\") "
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.397331    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1987d61-e072-4639-815a-50054e71db47-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f1987d61-e072-4639-815a-50054e71db47" (UID: "f1987d61-e072-4639-815a-50054e71db47"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.398694    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1987d61-e072-4639-815a-50054e71db47-kube-api-access-xrlk8" (OuterVolumeSpecName: "kube-api-access-xrlk8") pod "f1987d61-e072-4639-815a-50054e71db47" (UID: "f1987d61-e072-4639-815a-50054e71db47"). InnerVolumeSpecName "kube-api-access-xrlk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.493699    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xrlk8\" (UniqueName: \"kubernetes.io/projected/f1987d61-e072-4639-815a-50054e71db47-kube-api-access-xrlk8\") on node \"addons-294911\" DevicePath \"\""
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.493737    1253 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1987d61-e072-4639-815a-50054e71db47-webhook-cert\") on node \"addons-294911\" DevicePath \"\""
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.896506    1253 scope.go:117] "RemoveContainer" containerID="c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7"
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.931772    1253 scope.go:117] "RemoveContainer" containerID="c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7"
	Dec 25 12:21:51 addons-294911 kubelet[1253]: E1225 12:21:51.932333    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7\": container with ID starting with c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7 not found: ID does not exist" containerID="c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7"
	Dec 25 12:21:51 addons-294911 kubelet[1253]: I1225 12:21:51.932383    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7"} err="failed to get container status \"c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7\": rpc error: code = NotFound desc = could not find container \"c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7\": container with ID starting with c3865028f4c4abf9b5211abd54a380f8d178ab38db4109e54e8fcad016b287d7 not found: ID does not exist"
	Dec 25 12:21:52 addons-294911 kubelet[1253]: I1225 12:21:52.288628    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f1987d61-e072-4639-815a-50054e71db47" path="/var/lib/kubelet/pods/f1987d61-e072-4639-815a-50054e71db47/volumes"
	
	
	==> storage-provisioner [626bf7d4c44646a2b4ca3af801fae6b79ffb11a3b4af7ac4c3617545040c9973] <==
	I1225 12:17:57.263053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 12:18:27.299279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c71e5511bfc97af2d48f6d941c85047d163ea6aa194ea0e70c6ea914d7f4bbc2] <==
	I1225 12:18:28.199891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 12:18:28.221817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 12:18:28.221932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 12:18:28.236925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa8e5e61-f729-49c1-b376-348372ee7926", APIVersion:"v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-294911_956899f5-21a5-4368-9d19-efface0936b1 became leader
	I1225 12:18:28.237995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 12:18:28.238375       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-294911_956899f5-21a5-4368-9d19-efface0936b1!
	I1225 12:18:28.339274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-294911_956899f5-21a5-4368-9d19-efface0936b1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-294911 -n addons-294911
helpers_test.go:261: (dbg) Run:  kubectl --context addons-294911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.16s)
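(Editor's aside, not part of the captured report.) The storage-provisioner entries at the tail of the post-mortem above show the usual hand-off: the first container dies on an apiserver i/o timeout, and its replacement only starts the provisioner controller after winning the kube-system/k8s.io-minikube-hostpath leader lock. A minimal client-go sketch of that acquire-then-start flow is below; it is an illustration only, using a coordination.k8s.io Lease lock and made-up durations, not the provisioner's actual vendored, Endpoints-based election code.

```go
// Minimal client-go leader-election sketch (illustration only; the
// storage-provisioner in the log above uses its own vendored election code).
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("building in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The log shows an identity of the form "<node>_<uuid>"; hostname is a stand-in.
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the elected instance reaches this point; this is where
				// the provisioner controller would be started.
				log.Println("acquired lease, starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}
```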

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-294911
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-294911: exit status 82 (2m1.502833221s)

                                                
                                                
-- stdout --
	* Stopping node "addons-294911"  ...
	* Stopping node "addons-294911"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-294911" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-294911
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-294911: exit status 11 (21.557399535s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-294911" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-294911
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-294911: exit status 11 (6.14589408s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-294911" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-294911
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-294911: exit status 11 (6.140340684s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-294911" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E1225 12:29:07.505761 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.666176 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.986663 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.020414966s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-467117
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr
E1225 12:29:08.627232 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr: (11.981230514s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image ls: (2.394092196s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-467117" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.42s)
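(Editor's aside, not part of the captured report.) The failure above comes down to a single check: after `docker tag` and `minikube image load --daemon`, the test lists the images inside the cluster and expects the new tag to be present (the assertion at functional_test.go:442). A rough standalone sketch of that verification follows, with the binary path, profile name, and tag taken from the log; it is an assumed re-implementation, not the real functional_test.go helper.

```go
// Assumed, standalone re-implementation of the failing check (not the real
// functional_test.go helper): list the images inside the cluster and require
// the freshly loaded tag to be present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageLoaded(minikube, profile, tag string) (bool, error) {
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image ls failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), tag), nil
}

func main() {
	ok, err := imageLoaded(
		"out/minikube-linux-amd64",
		"functional-467117",
		"gcr.io/google-containers/addon-resizer:functional-467117",
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	if !ok {
		fmt.Println("expected the tag to be loaded into minikube but it is not there")
	}
}
```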

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (11.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdspecific-port3153413480/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.812753ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.427519ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.057169ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.87192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.736128ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2023/12/25 12:29:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.810542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E1225 12:29:48.311514 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.684731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 11.184629032s: exit status 1
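(Editor's aside, not part of the captured report.) What times out here is a simple poll: the test repeatedly asks the node, over ssh, whether the 9p mount has appeared, and gives up after roughly 11 seconds. A rough standalone sketch of that loop is below, reusing the command and mount point shown in the log; it is an assumed re-implementation, not the actual functional_test_mount_test.go code.

```go
// Assumed re-implementation of the poll that times out above (not the actual
// functional_test_mount_test.go code): keep running findmnt over ssh until
// the 9p mount appears or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(minikube, profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(minikube, "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return nil
		}
		time.Sleep(time.Second) // the real test also waits between attempts
	}
	return fmt.Errorf("%s did not appear within %s", mountPoint, timeout)
}

func main() {
	if err := waitForMount("out/minikube-linux-amd64", "functional-467117", "/mount-9p", 11*time.Second); err != nil {
		fmt.Println(err)
	}
}
```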
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (220.747902ms)

                                                
                                                
-- stdout --
	total 0
	drwxr-xr-x  2 root root  40 Dec 25 12:29 .
	drwxr-xr-x 20 root root 560 Dec 25 12:29 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-467117 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "sudo umount -f /mount-9p": exit status 1 (219.30631ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-467117 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdspecific-port3153413480/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdspecific-port3153413480/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdspecific-port3153413480/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I1225 12:29:39.899123 1458506 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:39.899467 1458506 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:39.899480 1458506 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:39.899488 1458506 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:39.899784 1458506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:39.900114 1458506 mustload.go:65] Loading cluster: functional-467117
I1225 12:29:39.900512 1458506 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:39.900907 1458506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:39.900960 1458506 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:39.916307 1458506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
I1225 12:29:39.916695 1458506 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:39.917318 1458506 main.go:141] libmachine: Using API Version  1
I1225 12:29:39.917343 1458506 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:39.917834 1458506 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:39.918092 1458506 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:39.920103 1458506 host.go:66] Checking if "functional-467117" exists ...
I1225 12:29:39.920583 1458506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:39.920637 1458506 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:39.936172 1458506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
I1225 12:29:39.936613 1458506 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:39.937118 1458506 main.go:141] libmachine: Using API Version  1
I1225 12:29:39.937138 1458506 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:39.937502 1458506 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:39.937814 1458506 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:39.937951 1458506 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:39.938085 1458506 main.go:141] libmachine: (functional-467117) Calling .GetIP
I1225 12:29:39.941067 1458506 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:39.941471 1458506 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:39.941507 1458506 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:39.944799 1458506 out.go:177] 
W1225 12:29:39.946031 1458506 out.go:239] X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
W1225 12:29:39.946046 1458506 out.go:239] * 
* 
W1225 12:29:39.959956 1458506 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_ae9b78d55725b06f38ae64b56e3272c581e09edd_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_ae9b78d55725b06f38ae64b56e3272c581e09edd_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1225 12:29:39.961862 1458506 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (11.73s)
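Note on the failure above: the mount never started because minikube exited with IF_MOUNT_PORT, i.e. it could not use the requested host port 46464, so every later findmnt/umount probe naturally found nothing at /mount-9p. As a rough illustration of the kind of pre-flight check involved (this is not minikube's actual implementation; the helper below is hypothetical), a port can be tested by simply trying to listen on it:

```go
package main

import (
	"fmt"
	"net"
)

// portFree reports whether a TCP port can currently be bound on the local host.
// Minimal sketch of the kind of check behind the IF_MOUNT_PORT error above;
// it is not minikube's code.
func portFree(port int) bool {
	l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if err != nil {
		return false // already in use, or not permitted
	}
	l.Close()
	return true
}

func main() {
	// 46464 is the port requested via `minikube mount ... --port 46464` above.
	if !portFree(46464) {
		fmt.Println("port 46464 is busy or not permitted; choose another port")
	}
}
```

Re-running with a port that such a check reports as free (or omitting --port so minikube selects one itself) should avoid this exit path.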

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (180.28s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-441885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-441885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.617457217s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-441885 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-441885 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4] Running
E1225 12:31:51.193112 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.005185633s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1225 12:33:56.706986 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:56.712290 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:56.722528 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:56.742822 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:56.783173 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:56.863533 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:57.023963 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:57.344633 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:57.985882 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:33:59.266493 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:34:01.828322 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-441885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.175212263s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
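For context, the failing step above asks curl (over ssh, inside the VM) to fetch http://127.0.0.1/ with the Host header nginx.example.com, so the ingress rule routes the request to the nginx service; exit status 28 is curl's timeout code, meaning the ingress never answered within the limit. A minimal Go sketch of the same probe, with the address and host name taken from the test output above (this is not part of the test suite):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host sends the Host header the ingress rule matches on.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed (comparable to curl timing out with status 28):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```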
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-441885 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.118
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons disable ingress-dns --alsologtostderr -v=1
E1225 12:34:06.949442 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:34:07.347929 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons disable ingress-dns --alsologtostderr -v=1: (10.738206707s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons disable ingress --alsologtostderr -v=1
E1225 12:34:17.190655 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons disable ingress --alsologtostderr -v=1: (7.610625925s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-441885 -n ingress-addon-legacy-441885
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-441885 logs -n 25: (1.213735615s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount   | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port3153413480/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh mount |                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | grep 9p; ls -la /mount-9p; cat                                           |                             |         |         |                     |                     |
	|         | /mount-9p/pod-dates                                                      |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh sudo                                               | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| mount   | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC | 25 Dec 23 12:29 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC | 25 Dec 23 12:29 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| ssh     | functional-467117 ssh findmnt                                            | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC | 25 Dec 23 12:29 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-467117                                                     | functional-467117           | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC | 25 Dec 23 12:29 UTC |
	| start   | -p ingress-addon-legacy-441885                                           | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:29 UTC | 25 Dec 23 12:31 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                                       |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-441885                                              | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:31 UTC | 25 Dec 23 12:31 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-441885                                              | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:31 UTC | 25 Dec 23 12:31 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-441885                                              | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:31 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-441885 ip                                           | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:34 UTC | 25 Dec 23 12:34 UTC |
	| addons  | ingress-addon-legacy-441885                                              | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:34 UTC | 25 Dec 23 12:34 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-441885                                              | ingress-addon-legacy-441885 | jenkins | v1.32.0 | 25 Dec 23 12:34 UTC | 25 Dec 23 12:34 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:29:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:29:54.602558 1459169 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:29:54.602839 1459169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:54.602848 1459169 out.go:309] Setting ErrFile to fd 2...
	I1225 12:29:54.602853 1459169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:54.603055 1459169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:29:54.603690 1459169 out.go:303] Setting JSON to false
	I1225 12:29:54.604728 1459169 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":155548,"bootTime":1703351847,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:29:54.604799 1459169 start.go:138] virtualization: kvm guest
	I1225 12:29:54.607473 1459169 out.go:177] * [ingress-addon-legacy-441885] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:29:54.609313 1459169 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:29:54.609264 1459169 notify.go:220] Checking for updates...
	I1225 12:29:54.610913 1459169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:29:54.612504 1459169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:29:54.614079 1459169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:29:54.615727 1459169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:29:54.617250 1459169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:29:54.618787 1459169 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:29:54.658050 1459169 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 12:29:54.659438 1459169 start.go:298] selected driver: kvm2
	I1225 12:29:54.659462 1459169 start.go:902] validating driver "kvm2" against <nil>
	I1225 12:29:54.659478 1459169 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:29:54.660280 1459169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:29:54.660383 1459169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 12:29:54.676089 1459169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 12:29:54.676201 1459169 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1225 12:29:54.676462 1459169 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 12:29:54.676535 1459169 cni.go:84] Creating CNI manager for ""
	I1225 12:29:54.676561 1459169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:29:54.676584 1459169 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1225 12:29:54.676675 1459169 start_flags.go:323] config:
	{Name:ingress-addon-legacy-441885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-441885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:29:54.676902 1459169 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:29:54.679798 1459169 out.go:177] * Starting control plane node ingress-addon-legacy-441885 in cluster ingress-addon-legacy-441885
	I1225 12:29:54.681287 1459169 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1225 12:29:54.702549 1459169 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1225 12:29:54.702597 1459169 cache.go:56] Caching tarball of preloaded images
	I1225 12:29:54.702772 1459169 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1225 12:29:54.705021 1459169 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1225 12:29:54.706620 1459169 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1225 12:29:54.733719 1459169 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1225 12:29:57.673699 1459169 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1225 12:29:57.673807 1459169 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1225 12:29:58.704858 1459169 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
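	(The preload step above downloads the v1.18.20 cri-o image tarball with an md5 checksum embedded in the URL, checksum=md5:0d02e096853189c5b37812b400898e14, and then verifies the saved file before using it. A minimal sketch of that kind of verification, with the file name as a placeholder and the digest copied from the URL above, not minikube's actual code path:)

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 compares a file's md5 digest with an expected hex string.
// Placeholder sketch of the checksum step logged above; not minikube code.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Hypothetical file name; digest taken from the preload URL above.
	if err := verifyMD5("preloaded-images.tar.lz4", "0d02e096853189c5b37812b400898e14"); err != nil {
		fmt.Println(err)
	}
}
```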
	I1225 12:29:58.705244 1459169 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/config.json ...
	I1225 12:29:58.705291 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/config.json: {Name:mkcf4b4ac958f65e52c67e79afe78b57669a6999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:29:58.705538 1459169 start.go:365] acquiring machines lock for ingress-addon-legacy-441885: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:29:58.705599 1459169 start.go:369] acquired machines lock for "ingress-addon-legacy-441885" in 38.627µs
	I1225 12:29:58.705626 1459169 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-441885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-441885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:29:58.705730 1459169 start.go:125] createHost starting for "" (driver="kvm2")
	I1225 12:29:58.708083 1459169 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1225 12:29:58.708302 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:29:58.708367 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:29:58.723881 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1225 12:29:58.724319 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:29:58.724975 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:29:58.725001 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:29:58.725413 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:29:58.725655 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetMachineName
	I1225 12:29:58.725836 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:29:58.725995 1459169 start.go:159] libmachine.API.Create for "ingress-addon-legacy-441885" (driver="kvm2")
	I1225 12:29:58.726024 1459169 client.go:168] LocalClient.Create starting
	I1225 12:29:58.726058 1459169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem
	I1225 12:29:58.726099 1459169 main.go:141] libmachine: Decoding PEM data...
	I1225 12:29:58.726114 1459169 main.go:141] libmachine: Parsing certificate...
	I1225 12:29:58.726170 1459169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem
	I1225 12:29:58.726192 1459169 main.go:141] libmachine: Decoding PEM data...
	I1225 12:29:58.726202 1459169 main.go:141] libmachine: Parsing certificate...
	I1225 12:29:58.726219 1459169 main.go:141] libmachine: Running pre-create checks...
	I1225 12:29:58.726230 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .PreCreateCheck
	I1225 12:29:58.726621 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetConfigRaw
	I1225 12:29:58.727048 1459169 main.go:141] libmachine: Creating machine...
	I1225 12:29:58.727064 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Create
	I1225 12:29:58.727206 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Creating KVM machine...
	I1225 12:29:58.728547 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found existing default KVM network
	I1225 12:29:58.729514 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:29:58.729331 1459203 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a00}
	I1225 12:29:58.735307 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | trying to create private KVM network mk-ingress-addon-legacy-441885 192.168.39.0/24...
	I1225 12:29:58.813633 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | private KVM network mk-ingress-addon-legacy-441885 192.168.39.0/24 created
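	(Here network.go has selected the free private subnet 192.168.39.0/24 and created the libvirt network mk-ingress-addon-legacy-441885 on it. Below is a rough sketch of how a candidate subnet can be screened against the host's existing interface addresses; this is an assumption for illustration only, as minikube's real logic also tracks its own reservations and existing libvirt networks:)

```go
package main

import (
	"fmt"
	"net"
)

// subnetInUse reports whether any local interface address already falls
// inside the candidate CIDR. Sketch only; not minikube's network.go.
func subnetInUse(cidr string) (bool, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ipNet, ok := a.(*net.IPNet); ok && subnet.Contains(ipNet.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Try candidates in order and keep the first one that is not in use.
	for _, candidate := range []string{"192.168.39.0/24", "192.168.50.0/24"} {
		used, err := subnetInUse(candidate)
		if err != nil {
			panic(err)
		}
		if !used {
			fmt.Println("using free private subnet", candidate)
			break
		}
	}
}
```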
	I1225 12:29:58.813670 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting up store path in /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885 ...
	I1225 12:29:58.813690 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:29:58.813601 1459203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:29:58.813703 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Building disk image from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 12:29:58.813799 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Downloading /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1225 12:29:59.064484 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:29:59.064332 1459203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa...
	I1225 12:29:59.379040 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:29:59.378908 1459203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/ingress-addon-legacy-441885.rawdisk...
	I1225 12:29:59.379079 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Writing magic tar header
	I1225 12:29:59.379092 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Writing SSH key tar header
	I1225 12:29:59.379102 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:29:59.379038 1459203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885 ...
	I1225 12:29:59.379115 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885
	I1225 12:29:59.379201 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885 (perms=drwx------)
	I1225 12:29:59.379232 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines
	I1225 12:29:59.379241 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines (perms=drwxr-xr-x)
	I1225 12:29:59.379249 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:29:59.379261 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600
	I1225 12:29:59.379272 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1225 12:29:59.379279 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube (perms=drwxr-xr-x)
	I1225 12:29:59.379289 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600 (perms=drwxrwxr-x)
	I1225 12:29:59.379298 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1225 12:29:59.379308 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1225 12:29:59.379317 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Creating domain...
	I1225 12:29:59.379324 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home/jenkins
	I1225 12:29:59.379342 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Checking permissions on dir: /home
	I1225 12:29:59.379357 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Skipping /home - not owner
	I1225 12:29:59.380632 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) define libvirt domain using xml: 
	I1225 12:29:59.380668 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) <domain type='kvm'>
	I1225 12:29:59.380683 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <name>ingress-addon-legacy-441885</name>
	I1225 12:29:59.380699 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <memory unit='MiB'>4096</memory>
	I1225 12:29:59.380705 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <vcpu>2</vcpu>
	I1225 12:29:59.380715 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <features>
	I1225 12:29:59.380721 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <acpi/>
	I1225 12:29:59.380731 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <apic/>
	I1225 12:29:59.380740 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <pae/>
	I1225 12:29:59.380757 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     
	I1225 12:29:59.380776 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   </features>
	I1225 12:29:59.380789 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <cpu mode='host-passthrough'>
	I1225 12:29:59.380800 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   
	I1225 12:29:59.380808 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   </cpu>
	I1225 12:29:59.380814 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <os>
	I1225 12:29:59.380827 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <type>hvm</type>
	I1225 12:29:59.380842 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <boot dev='cdrom'/>
	I1225 12:29:59.380858 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <boot dev='hd'/>
	I1225 12:29:59.380873 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <bootmenu enable='no'/>
	I1225 12:29:59.380884 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   </os>
	I1225 12:29:59.380897 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   <devices>
	I1225 12:29:59.380906 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <disk type='file' device='cdrom'>
	I1225 12:29:59.380926 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/boot2docker.iso'/>
	I1225 12:29:59.380950 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <target dev='hdc' bus='scsi'/>
	I1225 12:29:59.380961 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <readonly/>
	I1225 12:29:59.380972 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </disk>
	I1225 12:29:59.380985 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <disk type='file' device='disk'>
	I1225 12:29:59.380997 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1225 12:29:59.381017 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/ingress-addon-legacy-441885.rawdisk'/>
	I1225 12:29:59.381035 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <target dev='hda' bus='virtio'/>
	I1225 12:29:59.381048 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </disk>
	I1225 12:29:59.381058 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <interface type='network'>
	I1225 12:29:59.381072 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <source network='mk-ingress-addon-legacy-441885'/>
	I1225 12:29:59.381080 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <model type='virtio'/>
	I1225 12:29:59.381091 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </interface>
	I1225 12:29:59.381109 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <interface type='network'>
	I1225 12:29:59.381125 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <source network='default'/>
	I1225 12:29:59.381138 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <model type='virtio'/>
	I1225 12:29:59.381148 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </interface>
	I1225 12:29:59.381158 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <serial type='pty'>
	I1225 12:29:59.381172 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <target port='0'/>
	I1225 12:29:59.381185 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </serial>
	I1225 12:29:59.381198 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <console type='pty'>
	I1225 12:29:59.381213 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <target type='serial' port='0'/>
	I1225 12:29:59.381225 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </console>
	I1225 12:29:59.381252 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     <rng model='virtio'>
	I1225 12:29:59.381280 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)       <backend model='random'>/dev/random</backend>
	I1225 12:29:59.381296 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     </rng>
	I1225 12:29:59.381310 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     
	I1225 12:29:59.381323 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)     
	I1225 12:29:59.381347 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885)   </devices>
	I1225 12:29:59.381383 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) </domain>
	I1225 12:29:59.381406 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) 
	I1225 12:29:59.385578 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:b9:cd:8f in network default
	I1225 12:29:59.386232 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Ensuring networks are active...
	I1225 12:29:59.386252 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:29:59.387410 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Ensuring network default is active
	I1225 12:29:59.387750 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Ensuring network mk-ingress-addon-legacy-441885 is active
	I1225 12:29:59.388388 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Getting domain xml...
	I1225 12:29:59.389085 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Creating domain...
	I1225 12:30:00.672046 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Waiting to get IP...
	I1225 12:30:00.672987 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:00.673431 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:00.673451 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:00.673412 1459203 retry.go:31] will retry after 250.571248ms: waiting for machine to come up
	I1225 12:30:00.925917 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:00.926334 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:00.926365 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:00.926282 1459203 retry.go:31] will retry after 311.237277ms: waiting for machine to come up
	I1225 12:30:01.239042 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.239474 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.239504 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:01.239449 1459203 retry.go:31] will retry after 322.092527ms: waiting for machine to come up
	I1225 12:30:01.563034 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.563596 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.563619 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:01.563544 1459203 retry.go:31] will retry after 382.425001ms: waiting for machine to come up
	I1225 12:30:01.947268 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.947805 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:01.947830 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:01.947749 1459203 retry.go:31] will retry after 501.780117ms: waiting for machine to come up
	I1225 12:30:02.451561 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:02.452049 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:02.452073 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:02.452003 1459203 retry.go:31] will retry after 950.841118ms: waiting for machine to come up
	I1225 12:30:03.404374 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:03.404744 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:03.404772 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:03.404701 1459203 retry.go:31] will retry after 903.99093ms: waiting for machine to come up
	I1225 12:30:04.310974 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:04.311532 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:04.311569 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:04.311389 1459203 retry.go:31] will retry after 1.23012898s: waiting for machine to come up
	I1225 12:30:05.543132 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:05.543571 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:05.543598 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:05.543508 1459203 retry.go:31] will retry after 1.801056142s: waiting for machine to come up
	I1225 12:30:07.347715 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:07.348221 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:07.348262 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:07.348141 1459203 retry.go:31] will retry after 2.054999471s: waiting for machine to come up
	I1225 12:30:09.405188 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:09.405631 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:09.405690 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:09.405593 1459203 retry.go:31] will retry after 2.742698508s: waiting for machine to come up
	I1225 12:30:12.151448 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:12.151856 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:12.151894 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:12.151794 1459203 retry.go:31] will retry after 2.416711485s: waiting for machine to come up
	I1225 12:30:14.569743 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:14.570178 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:14.570210 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:14.570126 1459203 retry.go:31] will retry after 4.028284836s: waiting for machine to come up
	I1225 12:30:18.603406 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:18.603759 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find current IP address of domain ingress-addon-legacy-441885 in network mk-ingress-addon-legacy-441885
	I1225 12:30:18.603788 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | I1225 12:30:18.603704 1459203 retry.go:31] will retry after 4.071226866s: waiting for machine to come up
	I1225 12:30:22.678738 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.679225 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Found IP for machine: 192.168.39.118
	I1225 12:30:22.679252 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Reserving static IP address...
	I1225 12:30:22.679269 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has current primary IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.679693 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-441885", mac: "52:54:00:44:f0:80", ip: "192.168.39.118"} in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.767049 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Getting to WaitForSSH function...
	I1225 12:30:22.767099 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Reserved static IP address: 192.168.39.118
	I1225 12:30:22.767116 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Waiting for SSH to be available...
	I1225 12:30:22.769848 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.770306 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:22.770347 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.770542 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Using SSH client type: external
	I1225 12:30:22.770579 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa (-rw-------)
	I1225 12:30:22.770669 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:30:22.770694 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | About to run SSH command:
	I1225 12:30:22.770712 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | exit 0
	I1225 12:30:22.858507 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | SSH cmd err, output: <nil>: 
	I1225 12:30:22.858773 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) KVM machine creation complete!
	I1225 12:30:22.859121 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetConfigRaw
	I1225 12:30:22.859736 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:22.859927 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:22.860118 1459169 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1225 12:30:22.860153 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetState
	I1225 12:30:22.861358 1459169 main.go:141] libmachine: Detecting operating system of created instance...
	I1225 12:30:22.861376 1459169 main.go:141] libmachine: Waiting for SSH to be available...
	I1225 12:30:22.861383 1459169 main.go:141] libmachine: Getting to WaitForSSH function...
	I1225 12:30:22.861390 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:22.863885 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.864430 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:22.864471 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.864620 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:22.864800 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:22.864963 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:22.865121 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:22.865311 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:22.865722 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:22.865740 1459169 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1225 12:30:22.977825 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:30:22.977860 1459169 main.go:141] libmachine: Detecting the provisioner...
	I1225 12:30:22.977870 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:22.980611 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.981019 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:22.981052 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:22.981200 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:22.981439 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:22.981614 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:22.981758 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:22.981992 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:22.982315 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:22.982327 1459169 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1225 12:30:23.099637 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1225 12:30:23.099753 1459169 main.go:141] libmachine: found compatible host: buildroot
	I1225 12:30:23.099773 1459169 main.go:141] libmachine: Provisioning with buildroot...
	I1225 12:30:23.099786 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetMachineName
	I1225 12:30:23.100060 1459169 buildroot.go:166] provisioning hostname "ingress-addon-legacy-441885"
	I1225 12:30:23.100093 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetMachineName
	I1225 12:30:23.100272 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:23.103205 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.103594 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.103627 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.103835 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:23.104016 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.104160 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.104334 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:23.104520 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:23.104844 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:23.104860 1459169 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-441885 && echo "ingress-addon-legacy-441885" | sudo tee /etc/hostname
	I1225 12:30:23.231417 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-441885
	
	I1225 12:30:23.231450 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:23.234357 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.234737 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.234817 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.234940 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:23.235153 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.235328 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.235508 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:23.235699 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:23.236026 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:23.236052 1459169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-441885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-441885/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-441885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:30:23.359193 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:30:23.359237 1459169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:30:23.359291 1459169 buildroot.go:174] setting up certificates
	I1225 12:30:23.359305 1459169 provision.go:83] configureAuth start
	I1225 12:30:23.359320 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetMachineName
	I1225 12:30:23.359664 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetIP
	I1225 12:30:23.362394 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.362789 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.362825 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.362950 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:23.365003 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.365358 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.365389 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.365532 1459169 provision.go:138] copyHostCerts
	I1225 12:30:23.365571 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:30:23.365618 1459169 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:30:23.365629 1459169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:30:23.365694 1459169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:30:23.365818 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:30:23.365850 1459169 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:30:23.365859 1459169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:30:23.365891 1459169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:30:23.365947 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:30:23.365962 1459169 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:30:23.365966 1459169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:30:23.365988 1459169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:30:23.366036 1459169 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-441885 san=[192.168.39.118 192.168.39.118 localhost 127.0.0.1 minikube ingress-addon-legacy-441885]
	I1225 12:30:23.547199 1459169 provision.go:172] copyRemoteCerts
	I1225 12:30:23.547264 1459169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:30:23.547306 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:23.550166 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.550560 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.550590 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.550789 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:23.551016 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.551212 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:23.551351 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:30:23.635757 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:30:23.635833 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:30:23.660173 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:30:23.660272 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1225 12:30:23.683769 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:30:23.683865 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 12:30:23.708085 1459169 provision.go:86] duration metric: configureAuth took 348.746457ms
	I1225 12:30:23.708148 1459169 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:30:23.708485 1459169 config.go:182] Loaded profile config "ingress-addon-legacy-441885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1225 12:30:23.708612 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:23.711506 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.711862 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:23.711891 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:23.712114 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:23.712360 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.712524 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:23.712736 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:23.712907 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:23.713367 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:23.713393 1459169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:30:24.034678 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:30:24.034719 1459169 main.go:141] libmachine: Checking connection to Docker...
	I1225 12:30:24.034732 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetURL
	I1225 12:30:24.036172 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Using libvirt version 6000000
	I1225 12:30:24.038109 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.038506 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.038534 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.038697 1459169 main.go:141] libmachine: Docker is up and running!
	I1225 12:30:24.038716 1459169 main.go:141] libmachine: Reticulating splines...
	I1225 12:30:24.038726 1459169 client.go:171] LocalClient.Create took 25.312689234s
	I1225 12:30:24.038763 1459169 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-441885" took 25.312767535s
	I1225 12:30:24.038777 1459169 start.go:300] post-start starting for "ingress-addon-legacy-441885" (driver="kvm2")
	I1225 12:30:24.038794 1459169 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:30:24.038818 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:24.039104 1459169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:30:24.039127 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:24.041445 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.041710 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.041750 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.041836 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:24.042030 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:24.042217 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:24.042383 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:30:24.128414 1459169 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:30:24.132765 1459169 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:30:24.132793 1459169 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:30:24.132875 1459169 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:30:24.132974 1459169 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:30:24.132992 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:30:24.133112 1459169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:30:24.142270 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:30:24.164650 1459169 start.go:303] post-start completed in 125.850616ms
	I1225 12:30:24.164716 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetConfigRaw
	I1225 12:30:24.165337 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetIP
	I1225 12:30:24.168141 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.168552 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.168587 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.168842 1459169 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/config.json ...
	I1225 12:30:24.169043 1459169 start.go:128] duration metric: createHost completed in 25.463298875s
	I1225 12:30:24.169068 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:24.171192 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.171513 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.171554 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.171639 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:24.171827 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:24.171972 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:24.172079 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:24.172252 1459169 main.go:141] libmachine: Using SSH client type: native
	I1225 12:30:24.172565 1459169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1225 12:30:24.172577 1459169 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 12:30:24.287429 1459169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703507424.268334141
	
	I1225 12:30:24.287454 1459169 fix.go:206] guest clock: 1703507424.268334141
	I1225 12:30:24.287464 1459169 fix.go:219] Guest: 2023-12-25 12:30:24.268334141 +0000 UTC Remote: 2023-12-25 12:30:24.169055443 +0000 UTC m=+29.619464085 (delta=99.278698ms)
	I1225 12:30:24.287518 1459169 fix.go:190] guest clock delta is within tolerance: 99.278698ms
	I1225 12:30:24.287531 1459169 start.go:83] releasing machines lock for "ingress-addon-legacy-441885", held for 25.58191883s
	I1225 12:30:24.287565 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:24.287863 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetIP
	I1225 12:30:24.290418 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.290782 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.290816 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.290961 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:24.291590 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:24.291806 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:30:24.291909 1459169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:30:24.291952 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:24.292016 1459169 ssh_runner.go:195] Run: cat /version.json
	I1225 12:30:24.292044 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:30:24.294666 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.294982 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.295018 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.295044 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.295169 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:24.295354 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:24.295403 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:24.295436 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:24.295579 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:30:24.295601 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:24.295758 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:30:24.295753 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:30:24.295897 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:30:24.296033 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:30:24.388011 1459169 ssh_runner.go:195] Run: systemctl --version
	I1225 12:30:24.411662 1459169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:30:24.572584 1459169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 12:30:24.578645 1459169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:30:24.578717 1459169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:30:24.594169 1459169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 12:30:24.594199 1459169 start.go:475] detecting cgroup driver to use...
	I1225 12:30:24.594290 1459169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:30:24.608753 1459169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:30:24.621256 1459169 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:30:24.621325 1459169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:30:24.633773 1459169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:30:24.646466 1459169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:30:24.747005 1459169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:30:24.862051 1459169 docker.go:219] disabling docker service ...
	I1225 12:30:24.862121 1459169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:30:24.875730 1459169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:30:24.887880 1459169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:30:24.988898 1459169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:30:25.088424 1459169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:30:25.101111 1459169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:30:25.119365 1459169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1225 12:30:25.119452 1459169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:30:25.129467 1459169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:30:25.129541 1459169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:30:25.139477 1459169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:30:25.149258 1459169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:30:25.158876 1459169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
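Taken together, the commands above amount to a handful of edits to the CRI-O drop-in file; a consolidated sketch, using exactly the paths and values shown in the log and intended to be run on the guest, would be:
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo rm -rf /etc/cni/net.mk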
	I1225 12:30:25.168527 1459169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:30:25.176520 1459169 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:30:25.176585 1459169 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 12:30:25.188313 1459169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
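The sysctl failure above is expected until br_netfilter is loaded; a quick manual re-check of the same settings on the guest (same commands as in the log) would be:
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # should now report a value instead of "cannot stat"
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"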
	I1225 12:30:25.197492 1459169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:30:25.295165 1459169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 12:30:25.465150 1459169 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:30:25.465237 1459169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:30:25.470533 1459169 start.go:543] Will wait 60s for crictl version
	I1225 12:30:25.470611 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:25.474658 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:30:25.511933 1459169 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:30:25.512045 1459169 ssh_runner.go:195] Run: crio --version
	I1225 12:30:25.559319 1459169 ssh_runner.go:195] Run: crio --version
	I1225 12:30:25.611536 1459169 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1225 12:30:25.612885 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetIP
	I1225 12:30:25.615618 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:25.615933 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:30:25.615966 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:30:25.616178 1459169 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:30:25.620741 1459169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:30:25.635073 1459169 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1225 12:30:25.635188 1459169 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:30:25.679190 1459169 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1225 12:30:25.679295 1459169 ssh_runner.go:195] Run: which lz4
	I1225 12:30:25.683638 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1225 12:30:25.683756 1459169 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 12:30:25.688381 1459169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:30:25.688419 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1225 12:30:27.639558 1459169 crio.go:444] Took 1.955828 seconds to copy over tarball
	I1225 12:30:27.639634 1459169 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 12:30:30.891605 1459169 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.251934734s)
	I1225 12:30:30.891645 1459169 crio.go:451] Took 3.252056 seconds to extract the tarball
	I1225 12:30:30.891656 1459169 ssh_runner.go:146] rm: /preloaded.tar.lz4
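For reference, the preload step above copies the image tarball to the guest and unpacks it under /var; a rough manual equivalent (tarball and key paths as in the log; staging through /tmp is an assumption, since the runner writes to / with elevated privileges) might look like:
	KEY=/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa
	TARBALL=/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	scp -i "$KEY" "$TARBALL" docker@192.168.39.118:/tmp/preloaded.tar.lz4
	ssh -i "$KEY" docker@192.168.39.118 \
	    'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'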
	I1225 12:30:30.935781 1459169 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:30:30.991898 1459169 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1225 12:30:30.991948 1459169 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 12:30:30.992012 1459169 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:30:30.992049 1459169 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1225 12:30:30.992074 1459169 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1225 12:30:30.992138 1459169 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1225 12:30:30.992049 1459169 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1225 12:30:30.992079 1459169 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1225 12:30:30.992147 1459169 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1225 12:30:30.992171 1459169 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1225 12:30:30.993665 1459169 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:30:30.993676 1459169 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1225 12:30:30.993688 1459169 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1225 12:30:30.993695 1459169 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1225 12:30:30.993702 1459169 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1225 12:30:30.993668 1459169 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1225 12:30:30.993670 1459169 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1225 12:30:30.993678 1459169 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1225 12:30:31.146685 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1225 12:30:31.153970 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1225 12:30:31.160038 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1225 12:30:31.182272 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1225 12:30:31.187083 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1225 12:30:31.204791 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1225 12:30:31.204872 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1225 12:30:31.234571 1459169 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1225 12:30:31.234631 1459169 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1225 12:30:31.234688 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.237364 1459169 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1225 12:30:31.237412 1459169 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1225 12:30:31.237461 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.279654 1459169 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1225 12:30:31.279722 1459169 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1225 12:30:31.279775 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.295214 1459169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:30:31.367142 1459169 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1225 12:30:31.367203 1459169 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1225 12:30:31.367241 1459169 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1225 12:30:31.367262 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.367270 1459169 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1225 12:30:31.367306 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.372958 1459169 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1225 12:30:31.373008 1459169 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1225 12:30:31.373049 1459169 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1225 12:30:31.373084 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1225 12:30:31.373102 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.373018 1459169 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1225 12:30:31.373150 1459169 ssh_runner.go:195] Run: which crictl
	I1225 12:30:31.373205 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1225 12:30:31.373318 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1225 12:30:31.469308 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1225 12:30:31.469349 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1225 12:30:31.469377 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1225 12:30:31.469485 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1225 12:30:31.469506 1459169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1225 12:30:31.469547 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1225 12:30:31.469604 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1225 12:30:31.536104 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1225 12:30:31.552917 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1225 12:30:31.553029 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1225 12:30:31.555712 1459169 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1225 12:30:31.555781 1459169 cache_images.go:92] LoadImages completed in 563.817635ms
	W1225 12:30:31.555850 1459169 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
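The warning above means none of the listed images were found in CRI-O and the local cache directory has no fallback copies; a quick way to confirm what the guest runtime actually holds (image names taken from the LoadImages list above) would be:
	for img in registry.k8s.io/kube-apiserver:v1.18.20 \
	           registry.k8s.io/kube-controller-manager:v1.18.20 \
	           registry.k8s.io/kube-scheduler:v1.18.20 \
	           registry.k8s.io/kube-proxy:v1.18.20 \
	           registry.k8s.io/pause:3.2 \
	           registry.k8s.io/etcd:3.4.3-0 \
	           registry.k8s.io/coredns:1.6.7; do
	  sudo crictl inspecti "$img" >/dev/null 2>&1 && echo "present: $img" || echo "missing: $img"
	done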
	I1225 12:30:31.555930 1459169 ssh_runner.go:195] Run: crio config
	I1225 12:30:31.624976 1459169 cni.go:84] Creating CNI manager for ""
	I1225 12:30:31.625000 1459169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:30:31.625019 1459169 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:30:31.625043 1459169 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-441885 NodeName:ingress-addon-legacy-441885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 12:30:31.625184 1459169 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-441885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 12:30:31.625261 1459169 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-441885 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-441885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 12:30:31.625314 1459169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1225 12:30:31.634328 1459169 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:30:31.634412 1459169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 12:30:31.643161 1459169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1225 12:30:31.659707 1459169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1225 12:30:31.676013 1459169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
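The three scp operations above place the kubelet drop-in, the kubelet unit file, and the rendered kubeadm config on the guest. To see exactly what was written, the files can be read back over the profile's SSH session; a minimal sketch, assuming the ingress-addon-legacy-441885 VM is still running:

    minikube -p ingress-addon-legacy-441885 ssh -- sudo systemctl cat kubelet
    minikube -p ingress-addon-legacy-441885 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new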
	I1225 12:30:31.691995 1459169 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I1225 12:30:31.696003 1459169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:30:31.710100 1459169 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885 for IP: 192.168.39.118
	I1225 12:30:31.710138 1459169 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:31.710289 1459169 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:30:31.710329 1459169 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:30:31.710379 1459169 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key
	I1225 12:30:31.710401 1459169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt with IP's: []
	I1225 12:30:31.811107 1459169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt ...
	I1225 12:30:31.811146 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: {Name:mk551ec83a4c180e2f8e88b6c9c4f93a85eda509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:31.811357 1459169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key ...
	I1225 12:30:31.811378 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key: {Name:mkf0f77fda9e769fee0cb77814329d3eb3a56bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:31.811481 1459169 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key.ee260ba9
	I1225 12:30:31.811504 1459169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt.ee260ba9 with IP's: [192.168.39.118 10.96.0.1 127.0.0.1 10.0.0.1]
	I1225 12:30:32.081266 1459169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt.ee260ba9 ...
	I1225 12:30:32.081305 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt.ee260ba9: {Name:mkda756260226974cdcb9b0a4e228b1a43232dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:32.081486 1459169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key.ee260ba9 ...
	I1225 12:30:32.081503 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key.ee260ba9: {Name:mkbe9695b7f5415e717454d725d10645a358c70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:32.081605 1459169 certs.go:337] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt.ee260ba9 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt
	I1225 12:30:32.081689 1459169 certs.go:341] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key.ee260ba9 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key
	I1225 12:30:32.081751 1459169 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.key
	I1225 12:30:32.081775 1459169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.crt with IP's: []
	I1225 12:30:32.377798 1459169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.crt ...
	I1225 12:30:32.377841 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.crt: {Name:mk827a3be068131b785c2f912201ff021f351abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:32.378045 1459169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.key ...
	I1225 12:30:32.378067 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.key: {Name:mkb788e535e58e4d8a1c32970333a97c568aa108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:30:32.378172 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1225 12:30:32.378199 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1225 12:30:32.378217 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1225 12:30:32.378243 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1225 12:30:32.378256 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:30:32.378277 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:30:32.378300 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:30:32.378318 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:30:32.378386 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:30:32.378456 1459169 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:30:32.378474 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:30:32.378509 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:30:32.378545 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:30:32.378586 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:30:32.378645 1459169 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:30:32.378682 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:30:32.378703 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:30:32.378721 1459169 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:30:32.379437 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 12:30:32.406549 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 12:30:32.431136 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 12:30:32.454444 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 12:30:32.478212 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:30:32.501523 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:30:32.525283 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:30:32.549581 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:30:32.572979 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:30:32.597249 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:30:32.621100 1459169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:30:32.646106 1459169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 12:30:32.664616 1459169 ssh_runner.go:195] Run: openssl version
	I1225 12:30:32.670746 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:30:32.681116 1459169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:30:32.686363 1459169 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:30:32.686473 1459169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:30:32.692483 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:30:32.703127 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:30:32.713285 1459169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:30:32.718540 1459169 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:30:32.718618 1459169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:30:32.724728 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 12:30:32.734236 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:30:32.743613 1459169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:30:32.748266 1459169 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:30:32.748348 1459169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:30:32.753432 1459169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
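The hash-and-symlink passes above implement the standard OpenSSL lookup convention: a CA becomes trusted system-wide once /etc/ssl/certs contains a link named <subject-hash>.0 pointing at its PEM file. A minimal sketch of the same steps for the minikubeCA certificate (b5213941 is the hash value visible in the log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expect: OK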
	I1225 12:30:32.763100 1459169 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:30:32.767252 1459169 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:30:32.767321 1459169 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-441885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-441885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:30:32.767413 1459169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 12:30:32.767492 1459169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 12:30:32.804671 1459169 cri.go:89] found id: ""
	I1225 12:30:32.804761 1459169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 12:30:32.813256 1459169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 12:30:32.821573 1459169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 12:30:32.829875 1459169 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:30:32.829927 1459169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1225 12:30:32.885995 1459169 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1225 12:30:32.886274 1459169 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 12:30:33.027069 1459169 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 12:30:33.027200 1459169 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 12:30:33.027379 1459169 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 12:30:33.263305 1459169 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:30:33.264336 1459169 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:30:33.264404 1459169 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 12:30:33.398477 1459169 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 12:30:33.480120 1459169 out.go:204]   - Generating certificates and keys ...
	I1225 12:30:33.480238 1459169 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 12:30:33.480318 1459169 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 12:30:33.571147 1459169 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 12:30:33.676954 1459169 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1225 12:30:33.897376 1459169 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1225 12:30:33.997821 1459169 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1225 12:30:34.131906 1459169 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1225 12:30:34.132107 1459169 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-441885 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1225 12:30:34.551531 1459169 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1225 12:30:34.551890 1459169 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-441885 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1225 12:30:34.673161 1459169 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 12:30:34.742996 1459169 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 12:30:34.906981 1459169 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1225 12:30:34.907202 1459169 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 12:30:35.095499 1459169 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 12:30:35.205627 1459169 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 12:30:35.295400 1459169 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 12:30:35.640485 1459169 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 12:30:35.641070 1459169 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 12:30:35.642958 1459169 out.go:204]   - Booting up control plane ...
	I1225 12:30:35.643077 1459169 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 12:30:35.648801 1459169 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 12:30:35.649689 1459169 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 12:30:35.650510 1459169 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 12:30:35.652594 1459169 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 12:30:45.154355 1459169 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503511 seconds
	I1225 12:30:45.154529 1459169 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 12:30:45.172709 1459169 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 12:30:45.702352 1459169 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 12:30:45.702554 1459169 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-441885 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1225 12:30:46.214180 1459169 kubeadm.go:322] [bootstrap-token] Using token: g7qoli.kwox8no6c9osw4zn
	I1225 12:30:46.215633 1459169 out.go:204]   - Configuring RBAC rules ...
	I1225 12:30:46.215795 1459169 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 12:30:46.221504 1459169 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 12:30:46.235309 1459169 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 12:30:46.240304 1459169 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 12:30:46.243943 1459169 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 12:30:46.247975 1459169 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 12:30:46.259476 1459169 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 12:30:46.570325 1459169 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 12:30:46.647052 1459169 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 12:30:46.648916 1459169 kubeadm.go:322] 
	I1225 12:30:46.648984 1459169 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 12:30:46.649019 1459169 kubeadm.go:322] 
	I1225 12:30:46.649117 1459169 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 12:30:46.649126 1459169 kubeadm.go:322] 
	I1225 12:30:46.649168 1459169 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 12:30:46.651940 1459169 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 12:30:46.652028 1459169 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 12:30:46.652045 1459169 kubeadm.go:322] 
	I1225 12:30:46.652100 1459169 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 12:30:46.652259 1459169 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 12:30:46.652359 1459169 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 12:30:46.652371 1459169 kubeadm.go:322] 
	I1225 12:30:46.652484 1459169 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 12:30:46.652623 1459169 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 12:30:46.652641 1459169 kubeadm.go:322] 
	I1225 12:30:46.652757 1459169 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g7qoli.kwox8no6c9osw4zn \
	I1225 12:30:46.652894 1459169 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 12:30:46.652944 1459169 kubeadm.go:322]     --control-plane 
	I1225 12:30:46.652959 1459169 kubeadm.go:322] 
	I1225 12:30:46.653078 1459169 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 12:30:46.653087 1459169 kubeadm.go:322] 
	I1225 12:30:46.653198 1459169 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g7qoli.kwox8no6c9osw4zn \
	I1225 12:30:46.653341 1459169 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 12:30:46.654679 1459169 kubeadm.go:322] W1225 12:30:32.877955     961 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1225 12:30:46.654833 1459169 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 12:30:46.654988 1459169 kubeadm.go:322] W1225 12:30:35.642414     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1225 12:30:46.655179 1459169 kubeadm.go:322] W1225 12:30:35.643473     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1225 12:30:46.655192 1459169 cni.go:84] Creating CNI manager for ""
	I1225 12:30:46.655200 1459169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:30:46.656916 1459169 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 12:30:46.658090 1459169 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 12:30:46.669571 1459169 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
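The log records only that 457 bytes were written to /etc/cni/net.d/1-k8s.conflist, not the payload. For orientation, a typical bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen above looks roughly like the excerpt below; this is an illustrative sketch, not necessarily byte-for-byte what minikube generates:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }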
	I1225 12:30:46.687978 1459169 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 12:30:46.688041 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:46.688066 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=ingress-addon-legacy-441885 minikube.k8s.io/updated_at=2023_12_25T12_30_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
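Once the API server is reachable, the effect of those two kubectl invocations can be confirmed from the host. A minimal sketch, assuming minikube has created the usual ingress-addon-legacy-441885 kubeconfig context:

    kubectl --context ingress-addon-legacy-441885 get clusterrolebinding minikube-rbac -o wide
    kubectl --context ingress-addon-legacy-441885 get node ingress-addon-legacy-441885 --show-labels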
	I1225 12:30:46.907492 1459169 ops.go:34] apiserver oom_adj: -16
	I1225 12:30:46.907529 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:47.408572 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:47.907628 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:48.408366 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:48.908213 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:49.408496 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:49.908374 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:50.408062 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:50.908286 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:51.407915 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:51.907959 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:52.407871 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:52.908481 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:53.407837 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:53.908583 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:54.407762 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:54.907742 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:55.408273 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:55.907942 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:56.408442 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:56.907731 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:57.407653 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:57.908460 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:58.408164 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:58.908117 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:59.407747 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:30:59.907630 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:00.408359 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:00.908222 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:01.408593 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:01.908325 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:02.407842 1459169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:31:02.533558 1459169 kubeadm.go:1088] duration metric: took 15.845581538s to wait for elevateKubeSystemPrivileges.
	I1225 12:31:02.533604 1459169 kubeadm.go:406] StartCluster complete in 29.766286267s
	I1225 12:31:02.533646 1459169 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:31:02.533747 1459169 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:31:02.534874 1459169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:31:02.535166 1459169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 12:31:02.535349 1459169 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 12:31:02.535434 1459169 config.go:182] Loaded profile config "ingress-addon-legacy-441885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1225 12:31:02.535440 1459169 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-441885"
	I1225 12:31:02.535466 1459169 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-441885"
	I1225 12:31:02.535474 1459169 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-441885"
	I1225 12:31:02.535496 1459169 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-441885"
	I1225 12:31:02.535538 1459169 host.go:66] Checking if "ingress-addon-legacy-441885" exists ...
	I1225 12:31:02.535903 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:31:02.535943 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:31:02.536035 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:31:02.536062 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:31:02.535964 1459169 kapi.go:59] client config for ingress-addon-legacy-441885: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:31:02.536807 1459169 cert_rotation.go:137] Starting client certificate rotation controller
	I1225 12:31:02.552849 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I1225 12:31:02.552913 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I1225 12:31:02.553361 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:31:02.553421 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:31:02.553948 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:31:02.553971 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:31:02.554057 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:31:02.554083 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:31:02.554387 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:31:02.554428 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:31:02.554631 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetState
	I1225 12:31:02.555027 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:31:02.555063 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:31:02.557313 1459169 kapi.go:59] client config for ingress-addon-legacy-441885: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:31:02.557690 1459169 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-441885"
	I1225 12:31:02.557743 1459169 host.go:66] Checking if "ingress-addon-legacy-441885" exists ...
	I1225 12:31:02.558166 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:31:02.558204 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:31:02.570868 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1225 12:31:02.571389 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:31:02.571890 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:31:02.571914 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:31:02.572331 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:31:02.572552 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetState
	I1225 12:31:02.574536 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:31:02.576545 1459169 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:31:02.577956 1459169 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:31:02.577978 1459169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 12:31:02.578004 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:31:02.578145 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
	I1225 12:31:02.578595 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:31:02.579141 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:31:02.579168 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:31:02.579567 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:31:02.580194 1459169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:31:02.580232 1459169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:31:02.581390 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:31:02.581887 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:31:02.581921 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:31:02.582040 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:31:02.582252 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:31:02.582419 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:31:02.582614 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:31:02.596402 1459169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I1225 12:31:02.596952 1459169 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:31:02.597499 1459169 main.go:141] libmachine: Using API Version  1
	I1225 12:31:02.597519 1459169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:31:02.597865 1459169 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:31:02.598027 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetState
	I1225 12:31:02.599738 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .DriverName
	I1225 12:31:02.600054 1459169 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 12:31:02.600070 1459169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 12:31:02.600086 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHHostname
	I1225 12:31:02.602895 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:31:02.603418 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:f0:80", ip: ""} in network mk-ingress-addon-legacy-441885: {Iface:virbr1 ExpiryTime:2023-12-25 13:30:15 +0000 UTC Type:0 Mac:52:54:00:44:f0:80 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-441885 Clientid:01:52:54:00:44:f0:80}
	I1225 12:31:02.603438 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | domain ingress-addon-legacy-441885 has defined IP address 192.168.39.118 and MAC address 52:54:00:44:f0:80 in network mk-ingress-addon-legacy-441885
	I1225 12:31:02.603667 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHPort
	I1225 12:31:02.603904 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHKeyPath
	I1225 12:31:02.604041 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .GetSSHUsername
	I1225 12:31:02.604138 1459169 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/ingress-addon-legacy-441885/id_rsa Username:docker}
	I1225 12:31:02.771319 1459169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 12:31:02.792930 1459169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:31:02.806032 1459169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
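That sed pipeline rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 here) and adds the log plugin. To check that the injected hosts block landed, the ConfigMap can be read back; a minimal sketch, assuming the same kubeconfig context as above:

    kubectl --context ingress-addon-legacy-441885 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expect a block resembling:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }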
	I1225 12:31:03.042560 1459169 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-441885" context rescaled to 1 replicas
	I1225 12:31:03.042610 1459169 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:31:03.044926 1459169 out.go:177] * Verifying Kubernetes components...
	I1225 12:31:03.046951 1459169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:31:03.359587 1459169 main.go:141] libmachine: Making call to close driver server
	I1225 12:31:03.359628 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Close
	I1225 12:31:03.359954 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Closing plugin on server side
	I1225 12:31:03.360025 1459169 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:31:03.360059 1459169 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:31:03.360077 1459169 main.go:141] libmachine: Making call to close driver server
	I1225 12:31:03.360090 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Close
	I1225 12:31:03.360335 1459169 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:31:03.360353 1459169 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:31:03.360388 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) DBG | Closing plugin on server side
	I1225 12:31:03.393879 1459169 main.go:141] libmachine: Making call to close driver server
	I1225 12:31:03.393919 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Close
	I1225 12:31:03.394232 1459169 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:31:03.394285 1459169 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:31:03.443176 1459169 main.go:141] libmachine: Making call to close driver server
	I1225 12:31:03.443212 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Close
	I1225 12:31:03.443265 1459169 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1225 12:31:03.443562 1459169 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:31:03.443580 1459169 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:31:03.443590 1459169 main.go:141] libmachine: Making call to close driver server
	I1225 12:31:03.443599 1459169 main.go:141] libmachine: (ingress-addon-legacy-441885) Calling .Close
	I1225 12:31:03.443989 1459169 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:31:03.444009 1459169 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:31:03.446300 1459169 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1225 12:31:03.444165 1459169 kapi.go:59] client config for ingress-addon-legacy-441885: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:31:03.448241 1459169 addons.go:508] enable addons completed in 912.891101ms: enabled=[default-storageclass storage-provisioner]
	I1225 12:31:03.448432 1459169 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-441885" to be "Ready" ...
	I1225 12:31:03.478166 1459169 node_ready.go:49] node "ingress-addon-legacy-441885" has status "Ready":"True"
	I1225 12:31:03.478193 1459169 node_ready.go:38] duration metric: took 29.740213ms waiting for node "ingress-addon-legacy-441885" to be "Ready" ...
	I1225 12:31:03.478204 1459169 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:31:03.490640 1459169 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:05.498404 1459169 pod_ready.go:102] pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace has status "Ready":"False"
	I1225 12:31:07.998279 1459169 pod_ready.go:102] pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace has status "Ready":"False"
	I1225 12:31:10.498354 1459169 pod_ready.go:102] pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace has status "Ready":"False"
	I1225 12:31:11.509844 1459169 pod_ready.go:92] pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.509875 1459169 pod_ready.go:81] duration metric: took 8.019203034s waiting for pod "coredns-66bff467f8-zh7mg" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.509886 1459169 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.516518 1459169 pod_ready.go:92] pod "etcd-ingress-addon-legacy-441885" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.516549 1459169 pod_ready.go:81] duration metric: took 6.655035ms waiting for pod "etcd-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.516560 1459169 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.521260 1459169 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-441885" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.521291 1459169 pod_ready.go:81] duration metric: took 4.723866ms waiting for pod "kube-apiserver-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.521304 1459169 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.525698 1459169 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-441885" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.525723 1459169 pod_ready.go:81] duration metric: took 4.409388ms waiting for pod "kube-controller-manager-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.525735 1459169 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wjzf" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.530611 1459169 pod_ready.go:92] pod "kube-proxy-6wjzf" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.530640 1459169 pod_ready.go:81] duration metric: took 4.890795ms waiting for pod "kube-proxy-6wjzf" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.530649 1459169 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.691043 1459169 request.go:629] Waited for 160.297199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-441885
	I1225 12:31:11.892068 1459169 request.go:629] Waited for 197.40061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/nodes/ingress-addon-legacy-441885
	I1225 12:31:11.895965 1459169 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-441885" in "kube-system" namespace has status "Ready":"True"
	I1225 12:31:11.895991 1459169 pod_ready.go:81] duration metric: took 365.333364ms waiting for pod "kube-scheduler-ingress-addon-legacy-441885" in "kube-system" namespace to be "Ready" ...
	I1225 12:31:11.896004 1459169 pod_ready.go:38] duration metric: took 8.41778814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:31:11.896021 1459169 api_server.go:52] waiting for apiserver process to appear ...
	I1225 12:31:11.896077 1459169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:31:11.909634 1459169 api_server.go:72] duration metric: took 8.866990712s to wait for apiserver process to appear ...
	I1225 12:31:11.909660 1459169 api_server.go:88] waiting for apiserver healthz status ...
	I1225 12:31:11.909679 1459169 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I1225 12:31:11.915673 1459169 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I1225 12:31:11.916893 1459169 api_server.go:141] control plane version: v1.18.20
	I1225 12:31:11.916917 1459169 api_server.go:131] duration metric: took 7.251081ms to wait for apiserver health ...
	I1225 12:31:11.916925 1459169 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 12:31:12.091376 1459169 request.go:629] Waited for 174.376847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods
	I1225 12:31:12.097706 1459169 system_pods.go:59] 7 kube-system pods found
	I1225 12:31:12.097741 1459169 system_pods.go:61] "coredns-66bff467f8-zh7mg" [7adf4a2a-aca0-4902-8909-16d008ef31e5] Running
	I1225 12:31:12.097746 1459169 system_pods.go:61] "etcd-ingress-addon-legacy-441885" [0d79ff98-c4c8-4717-8d2d-6aead7152cd0] Running
	I1225 12:31:12.097750 1459169 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-441885" [86377130-e524-4d17-bdfd-c20778d59482] Running
	I1225 12:31:12.097754 1459169 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-441885" [5669fb11-dd1d-4b71-a054-e8c4e5a4fe06] Running
	I1225 12:31:12.097758 1459169 system_pods.go:61] "kube-proxy-6wjzf" [d09b76ea-4389-4633-9f53-291e249238c6] Running
	I1225 12:31:12.097762 1459169 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-441885" [b6a02fdb-ecb6-4031-9f9d-c91eaa037c16] Running
	I1225 12:31:12.097766 1459169 system_pods.go:61] "storage-provisioner" [75392310-d20f-40a7-b547-25da6bc472bf] Running
	I1225 12:31:12.097777 1459169 system_pods.go:74] duration metric: took 180.841718ms to wait for pod list to return data ...
	I1225 12:31:12.097786 1459169 default_sa.go:34] waiting for default service account to be created ...
	I1225 12:31:12.291196 1459169 request.go:629] Waited for 193.31001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/default/serviceaccounts
	I1225 12:31:12.294692 1459169 default_sa.go:45] found service account: "default"
	I1225 12:31:12.294726 1459169 default_sa.go:55] duration metric: took 196.93182ms for default service account to be created ...
	I1225 12:31:12.294737 1459169 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 12:31:12.491116 1459169 request.go:629] Waited for 196.305886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods
	I1225 12:31:12.496927 1459169 system_pods.go:86] 7 kube-system pods found
	I1225 12:31:12.496961 1459169 system_pods.go:89] "coredns-66bff467f8-zh7mg" [7adf4a2a-aca0-4902-8909-16d008ef31e5] Running
	I1225 12:31:12.496967 1459169 system_pods.go:89] "etcd-ingress-addon-legacy-441885" [0d79ff98-c4c8-4717-8d2d-6aead7152cd0] Running
	I1225 12:31:12.496971 1459169 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-441885" [86377130-e524-4d17-bdfd-c20778d59482] Running
	I1225 12:31:12.496975 1459169 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-441885" [5669fb11-dd1d-4b71-a054-e8c4e5a4fe06] Running
	I1225 12:31:12.496979 1459169 system_pods.go:89] "kube-proxy-6wjzf" [d09b76ea-4389-4633-9f53-291e249238c6] Running
	I1225 12:31:12.496982 1459169 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-441885" [b6a02fdb-ecb6-4031-9f9d-c91eaa037c16] Running
	I1225 12:31:12.496991 1459169 system_pods.go:89] "storage-provisioner" [75392310-d20f-40a7-b547-25da6bc472bf] Running
	I1225 12:31:12.496999 1459169 system_pods.go:126] duration metric: took 202.256248ms to wait for k8s-apps to be running ...
	I1225 12:31:12.497006 1459169 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:31:12.497054 1459169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:31:12.512507 1459169 system_svc.go:56] duration metric: took 15.489309ms WaitForService to wait for kubelet.
	I1225 12:31:12.512535 1459169 kubeadm.go:581] duration metric: took 9.469899385s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:31:12.512561 1459169 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:31:12.691602 1459169 request.go:629] Waited for 178.958149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/nodes
	I1225 12:31:12.694986 1459169 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:31:12.695032 1459169 node_conditions.go:123] node cpu capacity is 2
	I1225 12:31:12.695047 1459169 node_conditions.go:105] duration metric: took 182.479775ms to run NodePressure ...
	I1225 12:31:12.695061 1459169 start.go:228] waiting for startup goroutines ...
	I1225 12:31:12.695072 1459169 start.go:233] waiting for cluster config update ...
	I1225 12:31:12.695082 1459169 start.go:242] writing updated cluster config ...
	I1225 12:31:12.695450 1459169 ssh_runner.go:195] Run: rm -f paused
	I1225 12:31:12.745659 1459169 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1225 12:31:12.747936 1459169 out.go:177] 
	W1225 12:31:12.749775 1459169 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1225 12:31:12.751480 1459169 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1225 12:31:12.752978 1459169 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-441885" cluster and "default" namespace by default
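The start log above records an apiserver health probe (api_server.go: "Checking apiserver healthz at https://192.168.39.118:8443/healthz ... returned 200"). For readers reproducing that check by hand, the following is a minimal Go sketch under stated assumptions: the endpoint URL is the one printed in the log, and TLS verification is skipped purely for illustration, whereas the real check authenticates with the profile's client certificate and CA shown in the kapi.go client config earlier in this log.

	// healthz_probe.go - minimal sketch of the healthz check recorded above.
	// Assumptions: apiserver address taken from the log; InsecureSkipVerify is
	// for illustration only (minikube uses the profile's client cert and CA).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.118:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", matching the log.
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}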
	
	
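The CRI-O journal excerpt that follows is dominated by the runtime's gRPC traffic: repeated Version, ImageFsInfo, ListContainers, and ListPodSandbox request/response pairs polled by the kubelet and by minikube's log collection. As a hedged usage note (the profile name is the one used in this run), roughly the same view can be obtained interactively from inside the node with crictl:

	$ minikube -p ingress-addon-legacy-441885 ssh
	$ sudo crictl version      # RuntimeName/RuntimeVersion, cf. the Version responses (cri-o 1.24.1)
	$ sudo crictl ps -a        # containers incl. exited admission jobs, cf. ListContainers
	$ sudo crictl pods         # pod sandboxes, cf. ListPodSandbox
	$ sudo crictl imagefsinfo  # image filesystem usage, cf. ImageFsInfo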
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 12:30:11 UTC, ends at Mon 2023-12-25 12:34:25 UTC. --
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.499566928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f657b3f7-7219-4c9e-aa5e-c4a5fcc89636 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.500937947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9c5413e9-564c-48a8-a491-d52a46043bdd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.501542449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703507665501526031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9c5413e9-564c-48a8-a491-d52a46043bdd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.502173365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c9ad2d0-2d01-4887-a849-4dfd5c5e9f47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.502247041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c9ad2d0-2d01-4887-a849-4dfd5c5e9f47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.502479556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8086997fb9998410ccd34b06ff31e502aa7ea816fcd05a368ca671cb94670aa,PodSandboxId:8f0f9b95ebdf6c095801d3a34767681b5ef2b45328a7c9d1c0cf2173638da4a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703507649069261131,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qw8tb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6748cea-db47-48d2-9fa9-173805bcdf12,},Annotations:map[string]string{io.kubernetes.container.hash: ba18a2ca,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c371f5088ad4556a4387919989bc2fefd54ad77ddd63fb1603be95bf8f7093e0,PodSandboxId:1190e28b24f03e9da0793ff4c30062f9cda7c1fe807301293e27a87112c78d74,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703507507592559840,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 1724ba2e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05716ba53a540092f000b00f5a88e81567fcdf4322c5b0f5efd4ce79cb3e1532,PodSandboxId:686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1703507485220052617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-2vxdt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a73b346b-c588-4a66-a9ec-f0ea7d80c86d,},Annotations:map[string]string{io.kubernetes.container.hash: aa0320be,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7b9cb440cbde9ac5a7608d8c94b2b6a104dfad37be2f6cf0a9c0e9afe9bbeb1,PodSandboxId:434f843ed51069a68f9199ed0a53cf0adbbfca930e1e92d494452678f315308d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507476065240843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wh4lr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f1d93b4-f44a-486c-844c-419f4a8a6606,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d498f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a871eb25a78e67a546281d95d873312b4bbd2ba6d991586f457a6e7b97ec,PodSandboxId:c1709c0f61ca8102b79f59ba8e6a91adf7f39f4f5ea1777901d7845ff2a344ae,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507475760162376,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cpxcj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5596e8a9-395d-483c-9b0d-b9988cf44d4b,},Annotations:map[string]string{io.kubernetes.container.hash: a88673ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e58499087f7275de94798bb7709fa498f55441a78c78c2953c1149c409c68a,PodSandboxId:3a0adad53e190b40fc51d5f36002834fad34a9bfc9af21fa2d45edd33088e58b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1703507466047566594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zh7mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4a2a-aca0-4902-8909-16d008ef31e5,},Annotations:map[string]string{io.kubernetes.container.hash: f3a66031,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2c120f8c63297ea5920172c10
4d90898a431f7a896cb99cb5e5eea6b0eb9cd,PodSandboxId:73a4956b61f733bd8c6e7d982d5b0de9b50f3836c1bb6a336bff83832a2a00c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507464183122770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75392310-d20f-40a7-b547-25da6bc472bf,},Annotations:map[string]string{io.kubernetes.container.hash: 94f3c7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27828ea2b5deeb71219a189e2d09
cd62abe2a94db05840dac130494e88b24041,PodSandboxId:4ee5215d5a5361893ecbee21bc5dd3ab9a3fe36106dc8a47e0f21488a25fcfbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1703507463659319251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wjzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09b76ea-4389-4633-9f53-291e249238c6,},Annotations:map[string]string{io.kubernetes.container.hash: 58d0396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cedb288e21414355cb45d81fb0159466572cfeaea6454557f4dc11fe324b2d,PodS
andboxId:1e5bca2044ada1ae25795057db39622aa2dd9b6b7c91cd21fdf50e60d2e540d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1703507439277947354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff8e8d9dbc1b938a9886b7ea938604,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec4a67f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9995ba285f505048697f7b8ec49d201469ce8765a5702a0c943bc4a7abab0a7,PodSandboxId:4eb06455b181ca55be4851c6d7f4fa6d6002c
b9a6c56034602b290538d86f934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1703507438129691218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c8b403e9d73da9209267223b49334969c793e59fd721d8541fb6fe0dd2398f,PodSandboxId:5453067a1167e4a0542fd58b48843954ba13cd9ab73
7bd910c427475991ae50c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1703507437914075457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9de4b662731d786b978e4b4205b0bd0a7ce08420dbabb6f5e44b32d560e374,PodSandboxId:884232cdcfc9c
774e0cb17e136a38e460ea4ba6f041e341472b85f0f02a2bf54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1703507437955151529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c9ad2d0-2d01-4887-a849-4dfd5c5e9f47 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.541728543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=16c1968d-bcb0-4fce-b692-350fcfcf07ca name=/runtime.v1.RuntimeService/Version
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.541810401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=16c1968d-bcb0-4fce-b692-350fcfcf07ca name=/runtime.v1.RuntimeService/Version
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.543613481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=586b60db-36d3-4f78-840f-ab7d097ffbd8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.544203622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703507665544186943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=586b60db-36d3-4f78-840f-ab7d097ffbd8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.544804207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6836ebf6-535d-484f-9061-e8360ca346d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.544849446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6836ebf6-535d-484f-9061-e8360ca346d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.545154450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8086997fb9998410ccd34b06ff31e502aa7ea816fcd05a368ca671cb94670aa,PodSandboxId:8f0f9b95ebdf6c095801d3a34767681b5ef2b45328a7c9d1c0cf2173638da4a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703507649069261131,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qw8tb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6748cea-db47-48d2-9fa9-173805bcdf12,},Annotations:map[string]string{io.kubernetes.container.hash: ba18a2ca,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c371f5088ad4556a4387919989bc2fefd54ad77ddd63fb1603be95bf8f7093e0,PodSandboxId:1190e28b24f03e9da0793ff4c30062f9cda7c1fe807301293e27a87112c78d74,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703507507592559840,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 1724ba2e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05716ba53a540092f000b00f5a88e81567fcdf4322c5b0f5efd4ce79cb3e1532,PodSandboxId:686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1703507485220052617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-2vxdt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a73b346b-c588-4a66-a9ec-f0ea7d80c86d,},Annotations:map[string]string{io.kubernetes.container.hash: aa0320be,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7b9cb440cbde9ac5a7608d8c94b2b6a104dfad37be2f6cf0a9c0e9afe9bbeb1,PodSandboxId:434f843ed51069a68f9199ed0a53cf0adbbfca930e1e92d494452678f315308d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507476065240843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wh4lr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f1d93b4-f44a-486c-844c-419f4a8a6606,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d498f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a871eb25a78e67a546281d95d873312b4bbd2ba6d991586f457a6e7b97ec,PodSandboxId:c1709c0f61ca8102b79f59ba8e6a91adf7f39f4f5ea1777901d7845ff2a344ae,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507475760162376,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cpxcj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5596e8a9-395d-483c-9b0d-b9988cf44d4b,},Annotations:map[string]string{io.kubernetes.container.hash: a88673ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e58499087f7275de94798bb7709fa498f55441a78c78c2953c1149c409c68a,PodSandboxId:3a0adad53e190b40fc51d5f36002834fad34a9bfc9af21fa2d45edd33088e58b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1703507466047566594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zh7mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4a2a-aca0-4902-8909-16d008ef31e5,},Annotations:map[string]string{io.kubernetes.container.hash: f3a66031,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2c120f8c63297ea5920172c10
4d90898a431f7a896cb99cb5e5eea6b0eb9cd,PodSandboxId:73a4956b61f733bd8c6e7d982d5b0de9b50f3836c1bb6a336bff83832a2a00c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507464183122770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75392310-d20f-40a7-b547-25da6bc472bf,},Annotations:map[string]string{io.kubernetes.container.hash: 94f3c7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27828ea2b5deeb71219a189e2d09
cd62abe2a94db05840dac130494e88b24041,PodSandboxId:4ee5215d5a5361893ecbee21bc5dd3ab9a3fe36106dc8a47e0f21488a25fcfbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1703507463659319251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wjzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09b76ea-4389-4633-9f53-291e249238c6,},Annotations:map[string]string{io.kubernetes.container.hash: 58d0396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cedb288e21414355cb45d81fb0159466572cfeaea6454557f4dc11fe324b2d,PodS
andboxId:1e5bca2044ada1ae25795057db39622aa2dd9b6b7c91cd21fdf50e60d2e540d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1703507439277947354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff8e8d9dbc1b938a9886b7ea938604,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec4a67f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9995ba285f505048697f7b8ec49d201469ce8765a5702a0c943bc4a7abab0a7,PodSandboxId:4eb06455b181ca55be4851c6d7f4fa6d6002c
b9a6c56034602b290538d86f934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1703507438129691218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c8b403e9d73da9209267223b49334969c793e59fd721d8541fb6fe0dd2398f,PodSandboxId:5453067a1167e4a0542fd58b48843954ba13cd9ab73
7bd910c427475991ae50c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1703507437914075457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9de4b662731d786b978e4b4205b0bd0a7ce08420dbabb6f5e44b32d560e374,PodSandboxId:884232cdcfc9c
774e0cb17e136a38e460ea4ba6f041e341472b85f0f02a2bf54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1703507437955151529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6836ebf6-535d-484f-9061-e8360ca346d9 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.580667727Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=289247d1-ca44-481f-a5fd-3ad0ae67364f name=/runtime.v1.RuntimeService/Version
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.580761813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=289247d1-ca44-481f-a5fd-3ad0ae67364f name=/runtime.v1.RuntimeService/Version
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.583249718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7f428de6-4989-4ada-803f-7f5cf3aa7d5c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.583731647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703507665583716084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7f428de6-4989-4ada-803f-7f5cf3aa7d5c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.584542659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cc505cf2-762f-4f77-b383-b948b2662660 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.584618947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cc505cf2-762f-4f77-b383-b948b2662660 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.584902915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8086997fb9998410ccd34b06ff31e502aa7ea816fcd05a368ca671cb94670aa,PodSandboxId:8f0f9b95ebdf6c095801d3a34767681b5ef2b45328a7c9d1c0cf2173638da4a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703507649069261131,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qw8tb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6748cea-db47-48d2-9fa9-173805bcdf12,},Annotations:map[string]string{io.kubernetes.container.hash: ba18a2ca,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c371f5088ad4556a4387919989bc2fefd54ad77ddd63fb1603be95bf8f7093e0,PodSandboxId:1190e28b24f03e9da0793ff4c30062f9cda7c1fe807301293e27a87112c78d74,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703507507592559840,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 1724ba2e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05716ba53a540092f000b00f5a88e81567fcdf4322c5b0f5efd4ce79cb3e1532,PodSandboxId:686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1703507485220052617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-2vxdt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a73b346b-c588-4a66-a9ec-f0ea7d80c86d,},Annotations:map[string]string{io.kubernetes.container.hash: aa0320be,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7b9cb440cbde9ac5a7608d8c94b2b6a104dfad37be2f6cf0a9c0e9afe9bbeb1,PodSandboxId:434f843ed51069a68f9199ed0a53cf0adbbfca930e1e92d494452678f315308d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507476065240843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wh4lr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f1d93b4-f44a-486c-844c-419f4a8a6606,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d498f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a871eb25a78e67a546281d95d873312b4bbd2ba6d991586f457a6e7b97ec,PodSandboxId:c1709c0f61ca8102b79f59ba8e6a91adf7f39f4f5ea1777901d7845ff2a344ae,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507475760162376,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cpxcj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5596e8a9-395d-483c-9b0d-b9988cf44d4b,},Annotations:map[string]string{io.kubernetes.container.hash: a88673ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e58499087f7275de94798bb7709fa498f55441a78c78c2953c1149c409c68a,PodSandboxId:3a0adad53e190b40fc51d5f36002834fad34a9bfc9af21fa2d45edd33088e58b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1703507466047566594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zh7mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4a2a-aca0-4902-8909-16d008ef31e5,},Annotations:map[string]string{io.kubernetes.container.hash: f3a66031,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2c120f8c63297ea5920172c10
4d90898a431f7a896cb99cb5e5eea6b0eb9cd,PodSandboxId:73a4956b61f733bd8c6e7d982d5b0de9b50f3836c1bb6a336bff83832a2a00c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507464183122770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75392310-d20f-40a7-b547-25da6bc472bf,},Annotations:map[string]string{io.kubernetes.container.hash: 94f3c7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27828ea2b5deeb71219a189e2d09
cd62abe2a94db05840dac130494e88b24041,PodSandboxId:4ee5215d5a5361893ecbee21bc5dd3ab9a3fe36106dc8a47e0f21488a25fcfbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1703507463659319251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wjzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09b76ea-4389-4633-9f53-291e249238c6,},Annotations:map[string]string{io.kubernetes.container.hash: 58d0396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cedb288e21414355cb45d81fb0159466572cfeaea6454557f4dc11fe324b2d,PodS
andboxId:1e5bca2044ada1ae25795057db39622aa2dd9b6b7c91cd21fdf50e60d2e540d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1703507439277947354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff8e8d9dbc1b938a9886b7ea938604,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec4a67f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9995ba285f505048697f7b8ec49d201469ce8765a5702a0c943bc4a7abab0a7,PodSandboxId:4eb06455b181ca55be4851c6d7f4fa6d6002c
b9a6c56034602b290538d86f934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1703507438129691218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c8b403e9d73da9209267223b49334969c793e59fd721d8541fb6fe0dd2398f,PodSandboxId:5453067a1167e4a0542fd58b48843954ba13cd9ab73
7bd910c427475991ae50c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1703507437914075457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9de4b662731d786b978e4b4205b0bd0a7ce08420dbabb6f5e44b32d560e374,PodSandboxId:884232cdcfc9c
774e0cb17e136a38e460ea4ba6f041e341472b85f0f02a2bf54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1703507437955151529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cc505cf2-762f-4f77-b383-b948b2662660 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.592384226Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=d52f1d7e-9a9f-406a-b95d-e13a147df110 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.592750512Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8f0f9b95ebdf6c095801d3a34767681b5ef2b45328a7c9d1c0cf2173638da4a2,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-qw8tb,Uid:f6748cea-db47-48d2-9fa9-173805bcdf12,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507646277482545,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qw8tb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6748cea-db47-48d2-9fa9-173805bcdf12,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:34:05.924523892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1190e28b24f03e9da0793ff4c30062f9cda7c1fe807301293e27a87112c78d74,Metadata:&PodSandboxMetadata{Name:nginx,Uid:4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507504818270207,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:44.470256004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0a0cfe6ed12fb1dfb1e158e0911c6b2eef968fd12db7ae9b03485cafe07722d,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1703507488200340124,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-12-25T12:31:26.346626672Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-2vxdt,Uid:a73b346b-c588-4a66-a9ec
-f0ea7d80c86d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1703507477576109524,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-2vxdt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a73b346b-c588-4a66-a9ec-f0ea7d80c86d,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:13.635152664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:434f843ed51069a68f9199ed0a53cf0adbbfca930e1e92d494452678f315308d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-wh4lr,Uid:9f1d93b4-f44a-486c-844c-419f4a8a6606,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1703507474058100148,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/ins
tance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: b2dd73b3-ea5f-4c84-b131-acfcb2a28382,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-wh4lr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f1d93b4-f44a-486c-844c-419f4a8a6606,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:13.710409531Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1709c0f61ca8102b79f59ba8e6a91adf7f39f4f5ea1777901d7845ff2a344ae,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-cpxcj,Uid:5596e8a9-395d-483c-9b0d-b9988cf44d4b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1703507474019121285,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 543c1c68-f7b1-407d-9475-21fe5130e3df,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: ingress-nginx-admission-create-cpxcj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5596e8a9-395d-483c-9b0d-b9988cf44d4b,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:13.672184555Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a0adad53e190b40fc51d5f36002834fad34a9bfc9af21fa2d45edd33088e58b,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-zh7mg,Uid:7adf4a2a-aca0-4902-8909-16d008ef31e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507465879928662,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-zh7mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4a2a-aca0-4902-8909-16d008ef31e5,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:04.622633126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73a4956b6
1f733bd8c6e7d982d5b0de9b50f3836c1bb6a336bff83832a2a00c3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:75392310-d20f-40a7-b547-25da6bc472bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507463800417267,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75392310-d20f-40a7-b547-25da6bc472bf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storag
e-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-25T12:31:03.454137655Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ee5215d5a5361893ecbee21bc5dd3ab9a3fe36106dc8a47e0f21488a25fcfbf,Metadata:&PodSandboxMetadata{Name:kube-proxy-6wjzf,Uid:d09b76ea-4389-4633-9f53-291e249238c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507462711421320,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6wjzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09b76ea-4389-4633-9f53-291e249238c6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T12:31:02.365864683Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:884232cdcfc9c774e0cb17e136a38e460ea4ba6f041e341472b85f0f02a2bf54,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-441885,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507437424772724,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-12-25T12:30:36.089518285Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4eb06455b181ca55be4851c6d7f4fa6d6002cb9a6c56034602b290538d86f934,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-441885,Uid:557108d0824c209762d74d1fb6913635,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703
507437420156441,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.118:8443,kubernetes.io/config.hash: 557108d0824c209762d74d1fb6913635,kubernetes.io/config.seen: 2023-12-25T12:30:36.089515506Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5453067a1167e4a0542fd58b48843954ba13cd9ab737bd910c427475991ae50c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-441885,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507437394772974,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-
441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-12-25T12:30:36.089517028Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e5bca2044ada1ae25795057db39622aa2dd9b6b7c91cd21fdf50e60d2e540d4,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-441885,Uid:fcff8e8d9dbc1b938a9886b7ea938604,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703507437384728608,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff8e8d9dbc1b938a9886b7ea938604,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.118:2379,kubernetes.io/config.hash: fcff8e8d9dbc1b938a9886b7ea938604,kubernete
s.io/config.seen: 2023-12-25T12:30:36.089505684Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=d52f1d7e-9a9f-406a-b95d-e13a147df110 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.593708516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9609078-37b0-47be-a815-e8f6f7139336 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.593755674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9609078-37b0-47be-a815-e8f6f7139336 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 12:34:25 ingress-addon-legacy-441885 crio[722]: time="2023-12-25 12:34:25.594075934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8086997fb9998410ccd34b06ff31e502aa7ea816fcd05a368ca671cb94670aa,PodSandboxId:8f0f9b95ebdf6c095801d3a34767681b5ef2b45328a7c9d1c0cf2173638da4a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1703507649069261131,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-qw8tb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6748cea-db47-48d2-9fa9-173805bcdf12,},Annotations:map[string]string{io.kubernetes.container.hash: ba18a2ca,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c371f5088ad4556a4387919989bc2fefd54ad77ddd63fb1603be95bf8f7093e0,PodSandboxId:1190e28b24f03e9da0793ff4c30062f9cda7c1fe807301293e27a87112c78d74,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1703507507592559840,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 1724ba2e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05716ba53a540092f000b00f5a88e81567fcdf4322c5b0f5efd4ce79cb3e1532,PodSandboxId:686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1703507485220052617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-2vxdt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a73b346b-c588-4a66-a9ec-f0ea7d80c86d,},Annotations:map[string]string{io.kubernetes.container.hash: aa0320be,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7b9cb440cbde9ac5a7608d8c94b2b6a104dfad37be2f6cf0a9c0e9afe9bbeb1,PodSandboxId:434f843ed51069a68f9199ed0a53cf0adbbfca930e1e92d494452678f315308d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507476065240843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wh4lr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f1d93b4-f44a-486c-844c-419f4a8a6606,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d498f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a871eb25a78e67a546281d95d873312b4bbd2ba6d991586f457a6e7b97ec,PodSandboxId:c1709c0f61ca8102b79f59ba8e6a91adf7f39f4f5ea1777901d7845ff2a344ae,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1703507475760162376,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cpxcj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5596e8a9-395d-483c-9b0d-b9988cf44d4b,},Annotations:map[string]string{io.kubernetes.container.hash: a88673ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e58499087f7275de94798bb7709fa498f55441a78c78c2953c1149c409c68a,PodSandboxId:3a0adad53e190b40fc51d5f36002834fad34a9bfc9af21fa2d45edd33088e58b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1703507466047566594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zh7mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4a2a-aca0-4902-8909-16d008ef31e5,},Annotations:map[string]string{io.kubernetes.container.hash: f3a66031,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2c120f8c63297ea5920172c10
4d90898a431f7a896cb99cb5e5eea6b0eb9cd,PodSandboxId:73a4956b61f733bd8c6e7d982d5b0de9b50f3836c1bb6a336bff83832a2a00c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507464183122770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75392310-d20f-40a7-b547-25da6bc472bf,},Annotations:map[string]string{io.kubernetes.container.hash: 94f3c7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27828ea2b5deeb71219a189e2d09
cd62abe2a94db05840dac130494e88b24041,PodSandboxId:4ee5215d5a5361893ecbee21bc5dd3ab9a3fe36106dc8a47e0f21488a25fcfbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1703507463659319251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wjzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09b76ea-4389-4633-9f53-291e249238c6,},Annotations:map[string]string{io.kubernetes.container.hash: 58d0396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cedb288e21414355cb45d81fb0159466572cfeaea6454557f4dc11fe324b2d,PodS
andboxId:1e5bca2044ada1ae25795057db39622aa2dd9b6b7c91cd21fdf50e60d2e540d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1703507439277947354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff8e8d9dbc1b938a9886b7ea938604,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec4a67f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9995ba285f505048697f7b8ec49d201469ce8765a5702a0c943bc4a7abab0a7,PodSandboxId:4eb06455b181ca55be4851c6d7f4fa6d6002c
b9a6c56034602b290538d86f934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1703507438129691218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c8b403e9d73da9209267223b49334969c793e59fd721d8541fb6fe0dd2398f,PodSandboxId:5453067a1167e4a0542fd58b48843954ba13cd9ab73
7bd910c427475991ae50c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1703507437914075457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9de4b662731d786b978e4b4205b0bd0a7ce08420dbabb6f5e44b32d560e374,PodSandboxId:884232cdcfc9c
774e0cb17e136a38e460ea4ba6f041e341472b85f0f02a2bf54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1703507437955151529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-441885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9609078-37b0-47be-a815-e8f6f7139336 name=/runtime.v1alpha2.Runti
meService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8086997fb999       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            16 seconds ago      Running             hello-world-app           0                   8f0f9b95ebdf6       hello-world-app-5f5d8b66bb-qw8tb
	c371f5088ad45       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   1190e28b24f03       nginx
	05716ba53a540       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   686ffbdc4c67c       ingress-nginx-controller-7fcf777cb7-2vxdt
	b7b9cb440cbde       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   434f843ed5106       ingress-nginx-admission-patch-wh4lr
	fd34a871eb25a       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   c1709c0f61ca8       ingress-nginx-admission-create-cpxcj
	19e58499087f7       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   3a0adad53e190       coredns-66bff467f8-zh7mg
	9e2c120f8c632       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   73a4956b61f73       storage-provisioner
	27828ea2b5dee       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   4ee5215d5a536       kube-proxy-6wjzf
	46cedb288e214       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   1e5bca2044ada       etcd-ingress-addon-legacy-441885
	a9995ba285f50       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   4eb06455b181c       kube-apiserver-ingress-addon-legacy-441885
	5b9de4b662731       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   884232cdcfc9c       kube-scheduler-ingress-addon-legacy-441885
	31c8b403e9d73       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   5453067a1167e       kube-controller-manager-ingress-addon-legacy-441885
	
	
	==> coredns [19e58499087f7275de94798bb7709fa498f55441a78c78c2953c1149c409c68a] <==
	[INFO] 10.244.0.5:32968 - 41159 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001516213s
	[INFO] 10.244.0.5:48015 - 18945 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000168521s
	[INFO] 10.244.0.5:32968 - 48816 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042105s
	[INFO] 10.244.0.5:48015 - 10230 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148454s
	[INFO] 10.244.0.5:32968 - 51391 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039467s
	[INFO] 10.244.0.5:48015 - 56916 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008612s
	[INFO] 10.244.0.5:32968 - 57711 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005236s
	[INFO] 10.244.0.5:48015 - 62222 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000168583s
	[INFO] 10.244.0.5:32968 - 18174 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035218s
	[INFO] 10.244.0.5:32968 - 17861 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086341s
	[INFO] 10.244.0.5:32968 - 26025 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081062s
	[INFO] 10.244.0.5:46062 - 53341 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082024s
	[INFO] 10.244.0.5:46062 - 65488 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077558s
	[INFO] 10.244.0.5:41888 - 21387 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003747s
	[INFO] 10.244.0.5:41888 - 18587 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060096s
	[INFO] 10.244.0.5:41888 - 28204 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073079s
	[INFO] 10.244.0.5:46062 - 41294 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000029865s
	[INFO] 10.244.0.5:46062 - 32728 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077242s
	[INFO] 10.244.0.5:41888 - 28834 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054736s
	[INFO] 10.244.0.5:41888 - 5443 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072012s
	[INFO] 10.244.0.5:46062 - 32081 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055022s
	[INFO] 10.244.0.5:41888 - 3438 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000456229s
	[INFO] 10.244.0.5:46062 - 37649 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000193284s
	[INFO] 10.244.0.5:46062 - 11274 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050039s
	[INFO] 10.244.0.5:41888 - 48519 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038558s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-441885
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-441885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=ingress-addon-legacy-441885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T12_30_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:30:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-441885
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:34:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:34:17 +0000   Mon, 25 Dec 2023 12:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:34:17 +0000   Mon, 25 Dec 2023 12:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:34:17 +0000   Mon, 25 Dec 2023 12:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:34:17 +0000   Mon, 25 Dec 2023 12:30:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ingress-addon-legacy-441885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 783e9791369e472cb76edb28582a9b3c
	  System UUID:                783e9791-369e-472c-b76e-db28582a9b3c
	  Boot ID:                    30868864-8a01-4e0e-9cea-67536c0f3516
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace    Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------    ----                                                  ------------  ----------  ---------------  -------------  ---
	  default      hello-world-app-5f5d8b66bb-qw8tb                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default      nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system  coredns-66bff467f8-zh7mg                              100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system  etcd-ingress-addon-legacy-441885                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system  kube-apiserver-ingress-addon-legacy-441885            250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system  kube-controller-manager-ingress-addon-legacy-441885   200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system  kube-proxy-6wjzf                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system  kube-scheduler-ingress-addon-legacy-441885            100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system  storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m39s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s  kubelet     Node ingress-addon-legacy-441885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s  kubelet     Node ingress-addon-legacy-441885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s  kubelet     Node ingress-addon-legacy-441885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m29s  kubelet     Node ingress-addon-legacy-441885 status is now: NodeReady
	  Normal  Starting                 3m22s  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec25 12:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.095122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.524889] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.515758] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152768] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.065660] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.321265] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.104765] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.138734] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.098701] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.207306] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +8.080111] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +3.141633] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.852965] systemd-fstab-generator[1422]: Ignoring "noauto" for root device
	[Dec25 12:31] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.181527] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.077133] kauditd_printk_skb: 6 callbacks suppressed
	[ +27.865036] kauditd_printk_skb: 7 callbacks suppressed
	[Dec25 12:34] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.648441] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [46cedb288e21414355cb45d81fb0159466572cfeaea6454557f4dc11fe324b2d] <==
	raft2023/12/25 12:30:39 INFO: newRaft 86c29206b457f123 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/25 12:30:39 INFO: 86c29206b457f123 became follower at term 1
	raft2023/12/25 12:30:39 INFO: 86c29206b457f123 switched to configuration voters=(9710484304057332003)
	2023-12-25 12:30:39.430989 W | auth: simple token is not cryptographically signed
	2023-12-25 12:30:39.437461 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-25 12:30:39.438142 I | etcdserver: 86c29206b457f123 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/25 12:30:39 INFO: 86c29206b457f123 switched to configuration voters=(9710484304057332003)
	2023-12-25 12:30:39.438750 I | etcdserver/membership: added member 86c29206b457f123 [https://192.168.39.118:2380] to cluster 56e4fbef5627b38f
	2023-12-25 12:30:39.443942 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-25 12:30:39.444212 I | embed: listening for peers on 192.168.39.118:2380
	2023-12-25 12:30:39.444592 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/25 12:30:40 INFO: 86c29206b457f123 is starting a new election at term 1
	raft2023/12/25 12:30:40 INFO: 86c29206b457f123 became candidate at term 2
	raft2023/12/25 12:30:40 INFO: 86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 2
	raft2023/12/25 12:30:40 INFO: 86c29206b457f123 became leader at term 2
	raft2023/12/25 12:30:40 INFO: raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 2
	2023-12-25 12:30:40.422649 I | etcdserver: published {Name:ingress-addon-legacy-441885 ClientURLs:[https://192.168.39.118:2379]} to cluster 56e4fbef5627b38f
	2023-12-25 12:30:40.422760 I | embed: ready to serve client requests
	2023-12-25 12:30:40.422974 I | embed: ready to serve client requests
	2023-12-25 12:30:40.424390 I | embed: serving client requests on 192.168.39.118:2379
	2023-12-25 12:30:40.426226 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-25 12:30:40.426463 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-25 12:30:40.427081 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-25 12:30:40.427167 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-25 12:31:02.025427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (480.085728ms) to execute
	
	
	==> kernel <==
	 12:34:25 up 4 min,  0 users,  load average: 2.66, 1.12, 0.46
	Linux ingress-addon-legacy-441885 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a9995ba285f505048697f7b8ec49d201469ce8765a5702a0c943bc4a7abab0a7] <==
	I1225 12:30:43.436495       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1225 12:30:43.436554       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1225 12:30:43.490813       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 12:30:43.492609       1 cache.go:39] Caches are synced for autoregister controller
	I1225 12:30:43.501554       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1225 12:30:43.501627       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 12:30:43.536601       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1225 12:30:44.383531       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1225 12:30:44.383553       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1225 12:30:44.393211       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1225 12:30:44.399268       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1225 12:30:44.399413       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1225 12:30:44.953338       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 12:30:45.013845       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1225 12:30:45.187963       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.118]
	I1225 12:30:45.188976       1 controller.go:609] quota admission added evaluator for: endpoints
	I1225 12:30:45.192814       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 12:30:45.750768       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1225 12:30:46.528842       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1225 12:30:46.629290       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1225 12:30:46.895726       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 12:31:02.346452       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1225 12:31:02.584723       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1225 12:31:13.584178       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1225 12:31:44.268429       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [31c8b403e9d73da9209267223b49334969c793e59fd721d8541fb6fe0dd2398f] <==
	I1225 12:31:02.687728       1 request.go:621] Throttling request took 1.053907397s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	I1225 12:31:02.689557       1 shared_informer.go:230] Caches are synced for taint 
	I1225 12:31:02.689935       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1225 12:31:02.690316       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-441885. Assuming now as a timestamp.
	I1225 12:31:02.691620       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1225 12:31:02.691361       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1225 12:31:02.691574       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-441885", UID:"d320810b-f146-4ec0-8e6d-1053100fa243", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-441885 event: Registered Node ingress-addon-legacy-441885 in Controller
	I1225 12:31:02.763393       1 shared_informer.go:230] Caches are synced for stateful set 
	I1225 12:31:02.782771       1 shared_informer.go:230] Caches are synced for disruption 
	I1225 12:31:02.783042       1 disruption.go:339] Sending events to api server.
	I1225 12:31:02.835296       1 shared_informer.go:230] Caches are synced for resource quota 
	I1225 12:31:02.874280       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1225 12:31:02.874322       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1225 12:31:02.882980       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1225 12:31:03.284842       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1225 12:31:03.284906       1 shared_informer.go:230] Caches are synced for resource quota 
	I1225 12:31:13.574248       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"abb451fc-793c-4af5-abb9-099e51617509", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1225 12:31:13.602770       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a1ebe9a4-989d-4232-a750-a4bc394429cd", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-2vxdt
	I1225 12:31:13.617469       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"543c1c68-f7b1-407d-9475-21fe5130e3df", APIVersion:"batch/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-cpxcj
	I1225 12:31:13.697903       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b2dd73b3-ea5f-4c84-b131-acfcb2a28382", APIVersion:"batch/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wh4lr
	I1225 12:31:16.197155       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"543c1c68-f7b1-407d-9475-21fe5130e3df", APIVersion:"batch/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1225 12:31:17.187798       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b2dd73b3-ea5f-4c84-b131-acfcb2a28382", APIVersion:"batch/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1225 12:34:05.884422       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"3f3d13a4-3693-48f0-baed-9398e82304c9", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1225 12:34:05.926719       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3878fabb-20dc-4210-aadd-c1ef2b26893c", APIVersion:"apps/v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-qw8tb
	E1225 12:34:22.643844       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-t76sb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [27828ea2b5deeb71219a189e2d09cd62abe2a94db05840dac130494e88b24041] <==
	W1225 12:31:03.915093       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1225 12:31:03.925380       1 node.go:136] Successfully retrieved node IP: 192.168.39.118
	I1225 12:31:03.925462       1 server_others.go:186] Using iptables Proxier.
	I1225 12:31:03.925950       1 server.go:583] Version: v1.18.20
	I1225 12:31:03.929608       1 config.go:315] Starting service config controller
	I1225 12:31:03.929657       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1225 12:31:03.929876       1 config.go:133] Starting endpoints config controller
	I1225 12:31:03.929887       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1225 12:31:04.030143       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1225 12:31:04.030271       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [5b9de4b662731d786b978e4b4205b0bd0a7ce08420dbabb6f5e44b32d560e374] <==
	I1225 12:30:43.495149       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1225 12:30:43.505577       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 12:30:43.505647       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1225 12:30:43.507292       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 12:30:43.509724       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1225 12:30:43.511525       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1225 12:30:43.513681       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 12:30:43.513783       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 12:30:43.513900       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 12:30:43.514514       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 12:30:43.514600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 12:30:43.515118       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 12:30:43.515218       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1225 12:30:43.515880       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 12:30:43.524444       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1225 12:30:43.524651       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 12:30:43.524716       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 12:30:44.347515       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 12:30:44.506348       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 12:30:44.547977       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 12:30:44.558263       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 12:30:44.681291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 12:30:44.954798       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 12:30:47.605915       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1225 12:31:02.670192       1 factory.go:503] pod: kube-system/coredns-66bff467f8-zh7mg is already present in the active queue
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 12:30:11 UTC, ends at Mon 2023-12-25 12:34:26 UTC. --
	Dec 25 12:31:26 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:31:26.347079    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 25 12:31:26 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:31:26.352529    1428 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-xj7rn": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-xj7rn" is forbidden: User "system:node:ingress-addon-legacy-441885" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-441885" and this object
	Dec 25 12:31:26 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:31:26.383198    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-xj7rn" (UniqueName: "kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn") pod "kube-ingress-dns-minikube" (UID: "f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0")
	Dec 25 12:31:27 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:31:27.483946    1428 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-xj7rn: failed to sync secret cache: timed out waiting for the condition
	Dec 25 12:31:27 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:31:27.484150    1428 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn podName:f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0 nodeName:}" failed. No retries permitted until 2023-12-25 12:31:27.984118589 +0000 UTC m=+41.541059438 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-xj7rn\" (UniqueName: \"kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn\") pod \"kube-ingress-dns-minikube\" (UID: \"f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0\") : failed to sync secret cache: timed out waiting for the condition"
	Dec 25 12:31:44 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:31:44.470500    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 25 12:31:44 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:31:44.545287    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-d4kkp" (UniqueName: "kubernetes.io/secret/4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4-default-token-d4kkp") pod "nginx" (UID: "4d52d7f6-a780-4dc5-94e1-b327e6f9d7a4")
	Dec 25 12:34:05 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:05.925499    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 25 12:34:05 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:05.948388    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-d4kkp" (UniqueName: "kubernetes.io/secret/f6748cea-db47-48d2-9fa9-173805bcdf12-default-token-d4kkp") pod "hello-world-app-5f5d8b66bb-qw8tb" (UID: "f6748cea-db47-48d2-9fa9-173805bcdf12")
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:07.526622    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 02dc1f14adc7d1755cb82dd5c7732a4325d9828cfeeda59682db161bf616741c
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:07.654248    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-xj7rn" (UniqueName: "kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn") pod "f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0" (UID: "f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0")
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:07.675529    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn" (OuterVolumeSpecName: "minikube-ingress-dns-token-xj7rn") pod "f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0" (UID: "f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0"). InnerVolumeSpecName "minikube-ingress-dns-token-xj7rn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:07.754636    1428 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-xj7rn" (UniqueName: "kubernetes.io/secret/f70d9cd3-85b2-4d5e-b7bd-516b5989d6c0-minikube-ingress-dns-token-xj7rn") on node "ingress-addon-legacy-441885" DevicePath ""
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:07.834373    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 02dc1f14adc7d1755cb82dd5c7732a4325d9828cfeeda59682db161bf616741c
	Dec 25 12:34:07 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:34:07.851246    1428 remote_runtime.go:295] ContainerStatus "02dc1f14adc7d1755cb82dd5c7732a4325d9828cfeeda59682db161bf616741c" from runtime service failed: rpc error: code = NotFound desc = could not find container "02dc1f14adc7d1755cb82dd5c7732a4325d9828cfeeda59682db161bf616741c": container with ID starting with 02dc1f14adc7d1755cb82dd5c7732a4325d9828cfeeda59682db161bf616741c not found: ID does not exist
	Dec 25 12:34:18 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:34:18.054114    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2vxdt.17a413307e0fdb57", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2vxdt", UID:"a73b346b-c588-4a66-a9ec-f0ea7d80c86d", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-441885"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15a7b9282d07757, ext:211604157309, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15a7b9282d07757, ext:211604157309, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2vxdt.17a413307e0fdb57" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 25 12:34:18 ingress-addon-legacy-441885 kubelet[1428]: E1225 12:34:18.072641    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2vxdt.17a413307e0fdb57", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2vxdt", UID:"a73b346b-c588-4a66-a9ec-f0ea7d80c86d", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-441885"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15a7b9282d07757, ext:211604157309, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15a7b9283e08560, ext:211621986696, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2vxdt.17a413307e0fdb57" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 25 12:34:20 ingress-addon-legacy-441885 kubelet[1428]: W1225 12:34:20.577379    1428 pod_container_deletor.go:77] Container "686ffbdc4c67c8d4cc32ac10df75e815a00aa99f79468fb5bb85f192e8ff5d79" not found in pod's containers
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.206603    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-webhook-cert") pod "a73b346b-c588-4a66-a9ec-f0ea7d80c86d" (UID: "a73b346b-c588-4a66-a9ec-f0ea7d80c86d")
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.206664    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-n5btv" (UniqueName: "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-ingress-nginx-token-n5btv") pod "a73b346b-c588-4a66-a9ec-f0ea7d80c86d" (UID: "a73b346b-c588-4a66-a9ec-f0ea7d80c86d")
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.211704    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a73b346b-c588-4a66-a9ec-f0ea7d80c86d" (UID: "a73b346b-c588-4a66-a9ec-f0ea7d80c86d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.212255    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-ingress-nginx-token-n5btv" (OuterVolumeSpecName: "ingress-nginx-token-n5btv") pod "a73b346b-c588-4a66-a9ec-f0ea7d80c86d" (UID: "a73b346b-c588-4a66-a9ec-f0ea7d80c86d"). InnerVolumeSpecName "ingress-nginx-token-n5btv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.307129    1428 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-webhook-cert") on node "ingress-addon-legacy-441885" DevicePath ""
	Dec 25 12:34:22 ingress-addon-legacy-441885 kubelet[1428]: I1225 12:34:22.307179    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-token-n5btv" (UniqueName: "kubernetes.io/secret/a73b346b-c588-4a66-a9ec-f0ea7d80c86d-ingress-nginx-token-n5btv") on node "ingress-addon-legacy-441885" DevicePath ""
	Dec 25 12:34:23 ingress-addon-legacy-441885 kubelet[1428]: W1225 12:34:23.064663    1428 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/a73b346b-c588-4a66-a9ec-f0ea7d80c86d/volumes" does not exist
	
	
	==> storage-provisioner [9e2c120f8c63297ea5920172c104d90898a431f7a896cb99cb5e5eea6b0eb9cd] <==
	I1225 12:31:04.300171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 12:31:04.311282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 12:31:04.311355       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 12:31:04.319285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 12:31:04.319639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-441885_a2b0a7ae-ff34-4af9-8440-7961d74373e3!
	I1225 12:31:04.320872       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2cad6a5-fc01-4976-91f9-5e8589f216ea", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-441885_a2b0a7ae-ff34-4af9-8440-7961d74373e3 became leader
	I1225 12:31:04.420320       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-441885_a2b0a7ae-ff34-4af9-8440-7961d74373e3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-441885 -n ingress-addon-legacy-441885
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-441885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.28s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- sh -c "ping -c 1 192.168.39.1": exit status 1 (196.386615ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-qn48b): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (194.808878ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-z5f74): exit status 1
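Note: "ping: permission denied (are you root?)" is the message busybox's ping applet prints when it cannot open a raw ICMP socket. CRI-O's default capability set typically does not include NET_RAW, so an unprivileged busybox pod cannot send ICMP echoes, which matches the exit status 1 seen for both pods above. A minimal sketch of a manifest that would permit the ping (illustrative names and image; this is not the suite's testdata/multinodes/multinode-pod-dns-test.yaml):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping                          # illustrative name only
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28   # image assumed for illustration
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]                      # lets busybox ping open a raw ICMP socket

With NET_RAW granted, a command like kubectl exec busybox-ping -- ping -c 1 192.168.39.1 would be expected to succeed for a non-root user; without it, the permission error above is the expected outcome on runtimes that drop NET_RAW by default.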
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-544936 -n multinode-544936
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-544936 logs -n 25: (1.369112436s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-555001 ssh -- ls                    | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-555001 ssh --                       | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-555001                           | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	| start   | -p mount-start-2-555001                           | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC |                     |
	|         | --profile mount-start-2-555001                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-555001 ssh -- ls                    | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-555001 ssh --                       | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-555001                           | mount-start-2-555001 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	| delete  | -p mount-start-1-537748                           | mount-start-1-537748 | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:38 UTC |
	| start   | -p multinode-544936                               | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:38 UTC | 25 Dec 23 12:40 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- apply -f                   | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- rollout                    | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- get pods -o                | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- get pods -o                | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-qn48b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-z5f74 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-qn48b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-z5f74 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-qn48b -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-z5f74 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- get pods -o                | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-qn48b                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC |                     |
	|         | busybox-5bc68d56bd-qn48b -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC | 25 Dec 23 12:40 UTC |
	|         | busybox-5bc68d56bd-z5f74                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-544936 -- exec                       | multinode-544936     | jenkins | v1.32.0 | 25 Dec 23 12:40 UTC |                     |
	|         | busybox-5bc68d56bd-z5f74 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:38:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:38:47.548919 1463142 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:38:47.549209 1463142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:38:47.549218 1463142 out.go:309] Setting ErrFile to fd 2...
	I1225 12:38:47.549223 1463142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:38:47.549437 1463142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:38:47.550084 1463142 out.go:303] Setting JSON to false
	I1225 12:38:47.551137 1463142 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":156081,"bootTime":1703351847,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:38:47.551212 1463142 start.go:138] virtualization: kvm guest
	I1225 12:38:47.553572 1463142 out.go:177] * [multinode-544936] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:38:47.554976 1463142 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:38:47.555006 1463142 notify.go:220] Checking for updates...
	I1225 12:38:47.556180 1463142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:38:47.557490 1463142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:38:47.558639 1463142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:38:47.559812 1463142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:38:47.561148 1463142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:38:47.562863 1463142 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:38:47.600328 1463142 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 12:38:47.601561 1463142 start.go:298] selected driver: kvm2
	I1225 12:38:47.601571 1463142 start.go:902] validating driver "kvm2" against <nil>
	I1225 12:38:47.601583 1463142 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:38:47.602357 1463142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:38:47.602485 1463142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 12:38:47.617909 1463142 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 12:38:47.618003 1463142 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1225 12:38:47.618224 1463142 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 12:38:47.618283 1463142 cni.go:84] Creating CNI manager for ""
	I1225 12:38:47.618299 1463142 cni.go:136] 0 nodes found, recommending kindnet
	I1225 12:38:47.618310 1463142 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 12:38:47.618357 1463142 start_flags.go:323] config:
	{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:38:47.618515 1463142 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:38:47.620586 1463142 out.go:177] * Starting control plane node multinode-544936 in cluster multinode-544936
	I1225 12:38:47.621857 1463142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:38:47.621902 1463142 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 12:38:47.621912 1463142 cache.go:56] Caching tarball of preloaded images
	I1225 12:38:47.622011 1463142 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:38:47.622024 1463142 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:38:47.622359 1463142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:38:47.622391 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json: {Name:mk2f212e242e486874cfc491a76b0d4127f70755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:38:47.622569 1463142 start.go:365] acquiring machines lock for multinode-544936: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:38:47.622608 1463142 start.go:369] acquired machines lock for "multinode-544936" in 19.855µs
	I1225 12:38:47.622627 1463142 start.go:93] Provisioning new machine with config: &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:38:47.622685 1463142 start.go:125] createHost starting for "" (driver="kvm2")
	I1225 12:38:47.624265 1463142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1225 12:38:47.624396 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:38:47.624442 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:38:47.639337 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I1225 12:38:47.639805 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:38:47.640333 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:38:47.640356 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:38:47.640713 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:38:47.640908 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:38:47.641099 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:38:47.641290 1463142 start.go:159] libmachine.API.Create for "multinode-544936" (driver="kvm2")
	I1225 12:38:47.641342 1463142 client.go:168] LocalClient.Create starting
	I1225 12:38:47.641377 1463142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem
	I1225 12:38:47.641421 1463142 main.go:141] libmachine: Decoding PEM data...
	I1225 12:38:47.641444 1463142 main.go:141] libmachine: Parsing certificate...
	I1225 12:38:47.641506 1463142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem
	I1225 12:38:47.641528 1463142 main.go:141] libmachine: Decoding PEM data...
	I1225 12:38:47.641548 1463142 main.go:141] libmachine: Parsing certificate...
	I1225 12:38:47.641567 1463142 main.go:141] libmachine: Running pre-create checks...
	I1225 12:38:47.641577 1463142 main.go:141] libmachine: (multinode-544936) Calling .PreCreateCheck
	I1225 12:38:47.641920 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetConfigRaw
	I1225 12:38:47.642374 1463142 main.go:141] libmachine: Creating machine...
	I1225 12:38:47.642392 1463142 main.go:141] libmachine: (multinode-544936) Calling .Create
	I1225 12:38:47.642535 1463142 main.go:141] libmachine: (multinode-544936) Creating KVM machine...
	I1225 12:38:47.643900 1463142 main.go:141] libmachine: (multinode-544936) DBG | found existing default KVM network
	I1225 12:38:47.644834 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:47.644680 1463165 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015340}
	I1225 12:38:47.650054 1463142 main.go:141] libmachine: (multinode-544936) DBG | trying to create private KVM network mk-multinode-544936 192.168.39.0/24...
	I1225 12:38:47.727910 1463142 main.go:141] libmachine: (multinode-544936) Setting up store path in /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936 ...
	I1225 12:38:47.727944 1463142 main.go:141] libmachine: (multinode-544936) DBG | private KVM network mk-multinode-544936 192.168.39.0/24 created
	I1225 12:38:47.727959 1463142 main.go:141] libmachine: (multinode-544936) Building disk image from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 12:38:47.727988 1463142 main.go:141] libmachine: (multinode-544936) Downloading /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1225 12:38:47.728011 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:47.727834 1463165 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:38:47.972279 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:47.972153 1463165 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa...
	I1225 12:38:48.083862 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:48.083702 1463165 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/multinode-544936.rawdisk...
	I1225 12:38:48.083903 1463142 main.go:141] libmachine: (multinode-544936) DBG | Writing magic tar header
	I1225 12:38:48.083918 1463142 main.go:141] libmachine: (multinode-544936) DBG | Writing SSH key tar header
	I1225 12:38:48.083926 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:48.083819 1463165 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936 ...
	I1225 12:38:48.083971 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936
	I1225 12:38:48.083986 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936 (perms=drwx------)
	I1225 12:38:48.084032 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines (perms=drwxr-xr-x)
	I1225 12:38:48.084053 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines
	I1225 12:38:48.084066 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube (perms=drwxr-xr-x)
	I1225 12:38:48.084082 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600 (perms=drwxrwxr-x)
	I1225 12:38:48.084092 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1225 12:38:48.084099 1463142 main.go:141] libmachine: (multinode-544936) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1225 12:38:48.084104 1463142 main.go:141] libmachine: (multinode-544936) Creating domain...
	I1225 12:38:48.084118 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:38:48.084129 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600
	I1225 12:38:48.084171 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1225 12:38:48.084184 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home/jenkins
	I1225 12:38:48.084192 1463142 main.go:141] libmachine: (multinode-544936) DBG | Checking permissions on dir: /home
	I1225 12:38:48.084198 1463142 main.go:141] libmachine: (multinode-544936) DBG | Skipping /home - not owner
	I1225 12:38:48.085442 1463142 main.go:141] libmachine: (multinode-544936) define libvirt domain using xml: 
	I1225 12:38:48.085471 1463142 main.go:141] libmachine: (multinode-544936) <domain type='kvm'>
	I1225 12:38:48.085505 1463142 main.go:141] libmachine: (multinode-544936)   <name>multinode-544936</name>
	I1225 12:38:48.085536 1463142 main.go:141] libmachine: (multinode-544936)   <memory unit='MiB'>2200</memory>
	I1225 12:38:48.085548 1463142 main.go:141] libmachine: (multinode-544936)   <vcpu>2</vcpu>
	I1225 12:38:48.085572 1463142 main.go:141] libmachine: (multinode-544936)   <features>
	I1225 12:38:48.085586 1463142 main.go:141] libmachine: (multinode-544936)     <acpi/>
	I1225 12:38:48.085595 1463142 main.go:141] libmachine: (multinode-544936)     <apic/>
	I1225 12:38:48.085600 1463142 main.go:141] libmachine: (multinode-544936)     <pae/>
	I1225 12:38:48.085608 1463142 main.go:141] libmachine: (multinode-544936)     
	I1225 12:38:48.085614 1463142 main.go:141] libmachine: (multinode-544936)   </features>
	I1225 12:38:48.085620 1463142 main.go:141] libmachine: (multinode-544936)   <cpu mode='host-passthrough'>
	I1225 12:38:48.085632 1463142 main.go:141] libmachine: (multinode-544936)   
	I1225 12:38:48.085645 1463142 main.go:141] libmachine: (multinode-544936)   </cpu>
	I1225 12:38:48.085656 1463142 main.go:141] libmachine: (multinode-544936)   <os>
	I1225 12:38:48.085673 1463142 main.go:141] libmachine: (multinode-544936)     <type>hvm</type>
	I1225 12:38:48.085683 1463142 main.go:141] libmachine: (multinode-544936)     <boot dev='cdrom'/>
	I1225 12:38:48.085690 1463142 main.go:141] libmachine: (multinode-544936)     <boot dev='hd'/>
	I1225 12:38:48.085697 1463142 main.go:141] libmachine: (multinode-544936)     <bootmenu enable='no'/>
	I1225 12:38:48.085705 1463142 main.go:141] libmachine: (multinode-544936)   </os>
	I1225 12:38:48.085714 1463142 main.go:141] libmachine: (multinode-544936)   <devices>
	I1225 12:38:48.085730 1463142 main.go:141] libmachine: (multinode-544936)     <disk type='file' device='cdrom'>
	I1225 12:38:48.085749 1463142 main.go:141] libmachine: (multinode-544936)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/boot2docker.iso'/>
	I1225 12:38:48.085763 1463142 main.go:141] libmachine: (multinode-544936)       <target dev='hdc' bus='scsi'/>
	I1225 12:38:48.085777 1463142 main.go:141] libmachine: (multinode-544936)       <readonly/>
	I1225 12:38:48.085786 1463142 main.go:141] libmachine: (multinode-544936)     </disk>
	I1225 12:38:48.085797 1463142 main.go:141] libmachine: (multinode-544936)     <disk type='file' device='disk'>
	I1225 12:38:48.085814 1463142 main.go:141] libmachine: (multinode-544936)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1225 12:38:48.085833 1463142 main.go:141] libmachine: (multinode-544936)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/multinode-544936.rawdisk'/>
	I1225 12:38:48.085846 1463142 main.go:141] libmachine: (multinode-544936)       <target dev='hda' bus='virtio'/>
	I1225 12:38:48.085858 1463142 main.go:141] libmachine: (multinode-544936)     </disk>
	I1225 12:38:48.085869 1463142 main.go:141] libmachine: (multinode-544936)     <interface type='network'>
	I1225 12:38:48.085897 1463142 main.go:141] libmachine: (multinode-544936)       <source network='mk-multinode-544936'/>
	I1225 12:38:48.085922 1463142 main.go:141] libmachine: (multinode-544936)       <model type='virtio'/>
	I1225 12:38:48.085933 1463142 main.go:141] libmachine: (multinode-544936)     </interface>
	I1225 12:38:48.085946 1463142 main.go:141] libmachine: (multinode-544936)     <interface type='network'>
	I1225 12:38:48.085973 1463142 main.go:141] libmachine: (multinode-544936)       <source network='default'/>
	I1225 12:38:48.085995 1463142 main.go:141] libmachine: (multinode-544936)       <model type='virtio'/>
	I1225 12:38:48.086008 1463142 main.go:141] libmachine: (multinode-544936)     </interface>
	I1225 12:38:48.086020 1463142 main.go:141] libmachine: (multinode-544936)     <serial type='pty'>
	I1225 12:38:48.086034 1463142 main.go:141] libmachine: (multinode-544936)       <target port='0'/>
	I1225 12:38:48.086043 1463142 main.go:141] libmachine: (multinode-544936)     </serial>
	I1225 12:38:48.086050 1463142 main.go:141] libmachine: (multinode-544936)     <console type='pty'>
	I1225 12:38:48.086068 1463142 main.go:141] libmachine: (multinode-544936)       <target type='serial' port='0'/>
	I1225 12:38:48.086082 1463142 main.go:141] libmachine: (multinode-544936)     </console>
	I1225 12:38:48.086094 1463142 main.go:141] libmachine: (multinode-544936)     <rng model='virtio'>
	I1225 12:38:48.086110 1463142 main.go:141] libmachine: (multinode-544936)       <backend model='random'>/dev/random</backend>
	I1225 12:38:48.086121 1463142 main.go:141] libmachine: (multinode-544936)     </rng>
	I1225 12:38:48.086136 1463142 main.go:141] libmachine: (multinode-544936)     
	I1225 12:38:48.086153 1463142 main.go:141] libmachine: (multinode-544936)     
	I1225 12:38:48.086167 1463142 main.go:141] libmachine: (multinode-544936)   </devices>
	I1225 12:38:48.086179 1463142 main.go:141] libmachine: (multinode-544936) </domain>
	I1225 12:38:48.086194 1463142 main.go:141] libmachine: (multinode-544936) 
	I1225 12:38:48.090369 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:ae:12:53 in network default
	I1225 12:38:48.090961 1463142 main.go:141] libmachine: (multinode-544936) Ensuring networks are active...
	I1225 12:38:48.090988 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:48.091884 1463142 main.go:141] libmachine: (multinode-544936) Ensuring network default is active
	I1225 12:38:48.092178 1463142 main.go:141] libmachine: (multinode-544936) Ensuring network mk-multinode-544936 is active
	I1225 12:38:48.092667 1463142 main.go:141] libmachine: (multinode-544936) Getting domain xml...
	I1225 12:38:48.093377 1463142 main.go:141] libmachine: (multinode-544936) Creating domain...
	I1225 12:38:49.348290 1463142 main.go:141] libmachine: (multinode-544936) Waiting to get IP...
	I1225 12:38:49.349183 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:49.349784 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:49.349809 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:49.349764 1463165 retry.go:31] will retry after 240.350149ms: waiting for machine to come up
	I1225 12:38:49.592422 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:49.592857 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:49.592883 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:49.592814 1463165 retry.go:31] will retry after 347.08574ms: waiting for machine to come up
	I1225 12:38:49.941628 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:49.942115 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:49.942142 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:49.942051 1463165 retry.go:31] will retry after 468.256687ms: waiting for machine to come up
	I1225 12:38:50.411701 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:50.412151 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:50.412176 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:50.412080 1463165 retry.go:31] will retry after 579.117661ms: waiting for machine to come up
	I1225 12:38:50.992869 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:50.993392 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:50.993422 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:50.993338 1463165 retry.go:31] will retry after 676.883753ms: waiting for machine to come up
	I1225 12:38:51.672619 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:51.673319 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:51.673352 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:51.673237 1463165 retry.go:31] will retry after 745.166614ms: waiting for machine to come up
	I1225 12:38:52.419808 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:52.420321 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:52.420349 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:52.420273 1463165 retry.go:31] will retry after 939.264095ms: waiting for machine to come up
	I1225 12:38:53.360956 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:53.361445 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:53.361474 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:53.361414 1463165 retry.go:31] will retry after 1.103266671s: waiting for machine to come up
	I1225 12:38:54.467027 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:54.467455 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:54.467487 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:54.467417 1463165 retry.go:31] will retry after 1.163762619s: waiting for machine to come up
	I1225 12:38:55.634508 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:55.635021 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:55.635054 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:55.634987 1463165 retry.go:31] will retry after 1.555363144s: waiting for machine to come up
	I1225 12:38:57.193186 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:57.193758 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:57.193790 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:57.193692 1463165 retry.go:31] will retry after 1.802524781s: waiting for machine to come up
	I1225 12:38:58.998246 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:38:58.998778 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:38:58.998800 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:38:58.998745 1463165 retry.go:31] will retry after 2.644838132s: waiting for machine to come up
	I1225 12:39:01.646632 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:01.647209 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:39:01.647237 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:39:01.647136 1463165 retry.go:31] will retry after 4.352823841s: waiting for machine to come up
	I1225 12:39:06.004319 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:06.004626 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:39:06.004654 1463142 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:39:06.004579 1463165 retry.go:31] will retry after 3.953990753s: waiting for machine to come up
	I1225 12:39:09.961560 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:09.961922 1463142 main.go:141] libmachine: (multinode-544936) Found IP for machine: 192.168.39.21
	I1225 12:39:09.961946 1463142 main.go:141] libmachine: (multinode-544936) Reserving static IP address...
	I1225 12:39:09.961956 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has current primary IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:09.962320 1463142 main.go:141] libmachine: (multinode-544936) DBG | unable to find host DHCP lease matching {name: "multinode-544936", mac: "52:54:00:c0:ee:9c", ip: "192.168.39.21"} in network mk-multinode-544936
	I1225 12:39:10.052039 1463142 main.go:141] libmachine: (multinode-544936) Reserved static IP address: 192.168.39.21
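The "will retry after ... waiting for machine to come up" lines above show the driver polling libvirt for the domain's DHCP lease with steadily growing delays until an IP appears. A minimal Go sketch of that wait-with-backoff pattern follows; lookupIP is a hypothetical stand-in for the lease query by MAC address, and the delays/deadline are illustrative, not minikube's actual retry.go values.

    // waitbackoff.go - sketch of the retry-with-growing-delay loop logged above.
    // lookupIP is a hypothetical stand-in for querying libvirt DHCP leases by MAC.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("unable to find current IP address of domain")

    // lookupIP fails for the first few attempts to illustrate the loop.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.39.21", nil
    }

    func waitForIP(deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 200 * time.Millisecond
    	for attempt := 0; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			return ip, nil
    		}
    		if time.Since(start) > deadline {
    			return "", fmt.Errorf("timed out after %s: %w", deadline, err)
    		}
    		// Grow the delay and add jitter, roughly matching the increasing
    		// 240ms -> 347ms -> 468ms -> ... intervals seen in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	ip, err := waitForIP(2 * time.Minute)
    	fmt.Println(ip, err)
    }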
	I1225 12:39:10.052076 1463142 main.go:141] libmachine: (multinode-544936) Waiting for SSH to be available...
	I1225 12:39:10.052086 1463142 main.go:141] libmachine: (multinode-544936) DBG | Getting to WaitForSSH function...
	I1225 12:39:10.054576 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.054944 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.054982 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.055049 1463142 main.go:141] libmachine: (multinode-544936) DBG | Using SSH client type: external
	I1225 12:39:10.055077 1463142 main.go:141] libmachine: (multinode-544936) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa (-rw-------)
	I1225 12:39:10.055109 1463142 main.go:141] libmachine: (multinode-544936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:39:10.055144 1463142 main.go:141] libmachine: (multinode-544936) DBG | About to run SSH command:
	I1225 12:39:10.055155 1463142 main.go:141] libmachine: (multinode-544936) DBG | exit 0
	I1225 12:39:10.146016 1463142 main.go:141] libmachine: (multinode-544936) DBG | SSH cmd err, output: <nil>: 
	I1225 12:39:10.146303 1463142 main.go:141] libmachine: (multinode-544936) KVM machine creation complete!
	I1225 12:39:10.146641 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetConfigRaw
	I1225 12:39:10.147205 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:10.147396 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:10.147573 1463142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1225 12:39:10.147589 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:39:10.148731 1463142 main.go:141] libmachine: Detecting operating system of created instance...
	I1225 12:39:10.148746 1463142 main.go:141] libmachine: Waiting for SSH to be available...
	I1225 12:39:10.148753 1463142 main.go:141] libmachine: Getting to WaitForSSH function...
	I1225 12:39:10.148759 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.151064 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.151493 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.151528 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.151637 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:10.151821 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.152001 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.152138 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:10.152339 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:10.152686 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:10.152699 1463142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1225 12:39:10.273872 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:39:10.273908 1463142 main.go:141] libmachine: Detecting the provisioner...
	I1225 12:39:10.273919 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.277198 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.277592 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.277630 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.277802 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:10.278024 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.278354 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.278567 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:10.278783 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:10.279129 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:10.279144 1463142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1225 12:39:10.399201 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1225 12:39:10.399293 1463142 main.go:141] libmachine: found compatible host: buildroot
	I1225 12:39:10.399305 1463142 main.go:141] libmachine: Provisioning with buildroot...
	I1225 12:39:10.399317 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:39:10.399633 1463142 buildroot.go:166] provisioning hostname "multinode-544936"
	I1225 12:39:10.399675 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:39:10.399911 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.403116 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.403523 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.403561 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.403779 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:10.403991 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.404201 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.404308 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:10.404478 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:10.404831 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:10.404846 1463142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936 && echo "multinode-544936" | sudo tee /etc/hostname
	I1225 12:39:10.542101 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-544936
	
	I1225 12:39:10.542132 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.545303 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.545630 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.545656 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.545906 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:10.546132 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.546329 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.546553 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:10.546716 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:10.547076 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:10.547104 1463142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-544936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-544936/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-544936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:39:10.675689 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
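The SSH command whose output ends just above sets the hostname and then patches /etc/hosts only when no matching entry exists, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A small sketch of assembling that idempotent edit as a command string, with the hostname passed in; this is illustrative, not the provisioner's actual code.

    // hostscmd.go - builds the same style of idempotent /etc/hosts edit as the
    // SSH command logged above.
    package main

    import "fmt"

    func hostsCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsCmd("multinode-544936"))
    }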
	I1225 12:39:10.675729 1463142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:39:10.675754 1463142 buildroot.go:174] setting up certificates
	I1225 12:39:10.675765 1463142 provision.go:83] configureAuth start
	I1225 12:39:10.675775 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:39:10.676098 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:39:10.679168 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.679545 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.679570 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.679705 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.682117 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.682455 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.682487 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.682615 1463142 provision.go:138] copyHostCerts
	I1225 12:39:10.682664 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:39:10.682705 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:39:10.682719 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:39:10.682777 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:39:10.682867 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:39:10.682885 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:39:10.682891 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:39:10.682910 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:39:10.682962 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:39:10.682977 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:39:10.682983 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:39:10.683000 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:39:10.683058 1463142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.multinode-544936 san=[192.168.39.21 192.168.39.21 localhost 127.0.0.1 minikube multinode-544936]
	I1225 12:39:10.868408 1463142 provision.go:172] copyRemoteCerts
	I1225 12:39:10.868493 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:39:10.868521 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:10.871451 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.871748 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:10.871787 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:10.871979 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:10.872211 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:10.872376 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:10.872513 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:10.963306 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:39:10.963411 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 12:39:10.988288 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:39:10.988387 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:39:11.013167 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:39:11.013239 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1225 12:39:11.037103 1463142 provision.go:86] duration metric: configureAuth took 361.324167ms
	I1225 12:39:11.037138 1463142 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:39:11.037394 1463142 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:39:11.037484 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:11.040297 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.040626 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.040659 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.040955 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:11.041163 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.041331 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.041454 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:11.041611 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:11.041985 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:11.042004 1463142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:39:11.347028 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:39:11.347058 1463142 main.go:141] libmachine: Checking connection to Docker...
	I1225 12:39:11.347071 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetURL
	I1225 12:39:11.348473 1463142 main.go:141] libmachine: (multinode-544936) DBG | Using libvirt version 6000000
	I1225 12:39:11.351023 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.351398 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.351448 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.351589 1463142 main.go:141] libmachine: Docker is up and running!
	I1225 12:39:11.351603 1463142 main.go:141] libmachine: Reticulating splines...
	I1225 12:39:11.351610 1463142 client.go:171] LocalClient.Create took 23.710256834s
	I1225 12:39:11.351636 1463142 start.go:167] duration metric: libmachine.API.Create for "multinode-544936" took 23.710351456s
	I1225 12:39:11.351646 1463142 start.go:300] post-start starting for "multinode-544936" (driver="kvm2")
	I1225 12:39:11.351655 1463142 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:39:11.351672 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:11.351943 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:39:11.351972 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:11.354269 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.354608 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.354653 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.354773 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:11.354980 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.355149 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:11.355295 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:11.443820 1463142 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:39:11.447840 1463142 command_runner.go:130] > NAME=Buildroot
	I1225 12:39:11.447860 1463142 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1225 12:39:11.447864 1463142 command_runner.go:130] > ID=buildroot
	I1225 12:39:11.447869 1463142 command_runner.go:130] > VERSION_ID=2021.02.12
	I1225 12:39:11.447874 1463142 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1225 12:39:11.448079 1463142 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:39:11.448097 1463142 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:39:11.448188 1463142 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:39:11.448308 1463142 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:39:11.448321 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:39:11.448432 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:39:11.456736 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:39:11.479723 1463142 start.go:303] post-start completed in 128.062397ms
	I1225 12:39:11.479780 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetConfigRaw
	I1225 12:39:11.480476 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:39:11.483267 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.483617 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.483647 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.483943 1463142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:39:11.484121 1463142 start.go:128] duration metric: createHost completed in 23.861425629s
	I1225 12:39:11.484147 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:11.486616 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.486934 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.486955 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.487063 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:11.487276 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.487434 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.487594 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:11.487735 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:39:11.488070 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:39:11.488086 1463142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:39:11.607317 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703507951.586717983
	
	I1225 12:39:11.607344 1463142 fix.go:206] guest clock: 1703507951.586717983
	I1225 12:39:11.607355 1463142 fix.go:219] Guest: 2023-12-25 12:39:11.586717983 +0000 UTC Remote: 2023-12-25 12:39:11.48413363 +0000 UTC m=+23.988150276 (delta=102.584353ms)
	I1225 12:39:11.607383 1463142 fix.go:190] guest clock delta is within tolerance: 102.584353ms
	I1225 12:39:11.607391 1463142 start.go:83] releasing machines lock for "multinode-544936", held for 23.984772894s
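The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is within tolerance (here 102ms). A sketch of parsing that output and computing the delta; the 1s tolerance below is an assumption for illustration only.

    // clockdelta.go - sketch of the guest-clock comparison suggested by the
    // fix.go log lines above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1703507951.586717983" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1703507951.586717983")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance
    	fmt.Printf("guest clock delta %s, within tolerance: %v\n", delta, delta <= tolerance)
    }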
	I1225 12:39:11.607419 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:11.607740 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:39:11.610413 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.610763 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.610791 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.610974 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:11.611518 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:11.611683 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:11.611791 1463142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:39:11.611830 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:11.611933 1463142 ssh_runner.go:195] Run: cat /version.json
	I1225 12:39:11.611968 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:11.614660 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.614720 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.615073 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.615113 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:11.615141 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.615155 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:11.615355 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:11.615359 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:11.615581 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.615585 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:11.615740 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:11.615763 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:11.615901 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:11.615905 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:11.699655 1463142 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I1225 12:39:11.699840 1463142 ssh_runner.go:195] Run: systemctl --version
	I1225 12:39:11.722648 1463142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1225 12:39:11.723521 1463142 command_runner.go:130] > systemd 247 (247)
	I1225 12:39:11.723544 1463142 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1225 12:39:11.723602 1463142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:39:11.884324 1463142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1225 12:39:11.890546 1463142 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1225 12:39:11.890611 1463142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:39:11.890686 1463142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:39:11.906175 1463142 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1225 12:39:11.906231 1463142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 12:39:11.906243 1463142 start.go:475] detecting cgroup driver to use...
	I1225 12:39:11.906323 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:39:11.920286 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:39:11.933792 1463142 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:39:11.933868 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:39:11.947654 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:39:11.961359 1463142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:39:11.976056 1463142 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1225 12:39:12.067154 1463142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:39:12.079960 1463142 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1225 12:39:12.183371 1463142 docker.go:219] disabling docker service ...
	I1225 12:39:12.183466 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:39:12.199660 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:39:12.212515 1463142 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1225 12:39:12.212633 1463142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:39:12.318612 1463142 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1225 12:39:12.318729 1463142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:39:12.332905 1463142 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1225 12:39:12.333194 1463142 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1225 12:39:12.420948 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:39:12.434582 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:39:12.451911 1463142 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1225 12:39:12.451977 1463142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:39:12.452047 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:39:12.462627 1463142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:39:12.462697 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:39:12.473177 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:39:12.483583 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:39:12.494069 1463142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:39:12.504830 1463142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:39:12.514082 1463142 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:39:12.514200 1463142 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:39:12.514262 1463142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 12:39:12.528631 1463142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:39:12.538754 1463142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:39:12.644579 1463142 ssh_runner.go:195] Run: sudo systemctl restart crio
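The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed to pin the pause image and switch the cgroup manager, then reloads systemd and restarts crio. The sketch below assembles those same shell commands; the paths and values are taken from the log, while the helper name and local execution are illustrative (minikube issues them through its ssh_runner on the guest).

    // crioconf.go - builds the sed/systemctl commands seen above for pointing
    // CRI-O at a pause image and the cgroupfs cgroup manager.
    package main

    import "fmt"

    func crioConfigCmds(pauseImage, cgroupManager string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    }

    func main() {
    	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
    		fmt.Println(c)
    	}
    }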
	I1225 12:39:12.816042 1463142 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:39:12.816147 1463142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:39:12.822351 1463142 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1225 12:39:12.822386 1463142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1225 12:39:12.822398 1463142 command_runner.go:130] > Device: 16h/22d	Inode: 794         Links: 1
	I1225 12:39:12.822412 1463142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:39:12.822421 1463142 command_runner.go:130] > Access: 2023-12-25 12:39:12.783554431 +0000
	I1225 12:39:12.822444 1463142 command_runner.go:130] > Modify: 2023-12-25 12:39:12.783554431 +0000
	I1225 12:39:12.822457 1463142 command_runner.go:130] > Change: 2023-12-25 12:39:12.783554431 +0000
	I1225 12:39:12.822467 1463142 command_runner.go:130] >  Birth: -
	I1225 12:39:12.822490 1463142 start.go:543] Will wait 60s for crictl version
	I1225 12:39:12.822551 1463142 ssh_runner.go:195] Run: which crictl
	I1225 12:39:12.826711 1463142 command_runner.go:130] > /usr/bin/crictl
	I1225 12:39:12.826793 1463142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:39:12.869488 1463142 command_runner.go:130] > Version:  0.1.0
	I1225 12:39:12.869548 1463142 command_runner.go:130] > RuntimeName:  cri-o
	I1225 12:39:12.869665 1463142 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1225 12:39:12.869742 1463142 command_runner.go:130] > RuntimeApiVersion:  v1
	I1225 12:39:12.871520 1463142 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:39:12.871588 1463142 ssh_runner.go:195] Run: crio --version
	I1225 12:39:12.915585 1463142 command_runner.go:130] > crio version 1.24.1
	I1225 12:39:12.915613 1463142 command_runner.go:130] > Version:          1.24.1
	I1225 12:39:12.915620 1463142 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:39:12.915624 1463142 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:39:12.915630 1463142 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:39:12.915634 1463142 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:39:12.915638 1463142 command_runner.go:130] > Compiler:         gc
	I1225 12:39:12.915643 1463142 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:39:12.915662 1463142 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:39:12.915669 1463142 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:39:12.915673 1463142 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:39:12.915677 1463142 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:39:12.916900 1463142 ssh_runner.go:195] Run: crio --version
	I1225 12:39:12.962566 1463142 command_runner.go:130] > crio version 1.24.1
	I1225 12:39:12.962606 1463142 command_runner.go:130] > Version:          1.24.1
	I1225 12:39:12.962620 1463142 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:39:12.962627 1463142 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:39:12.962636 1463142 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:39:12.962644 1463142 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:39:12.962652 1463142 command_runner.go:130] > Compiler:         gc
	I1225 12:39:12.962659 1463142 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:39:12.962670 1463142 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:39:12.962686 1463142 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:39:12.962703 1463142 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:39:12.962714 1463142 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:39:12.964783 1463142 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:39:12.966146 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:39:12.968798 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:12.969196 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:12.969227 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:12.969508 1463142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:39:12.974034 1463142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:39:12.986207 1463142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:39:12.986291 1463142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:39:13.030289 1463142 command_runner.go:130] > {
	I1225 12:39:13.030314 1463142 command_runner.go:130] >   "images": [
	I1225 12:39:13.030318 1463142 command_runner.go:130] >   ]
	I1225 12:39:13.030321 1463142 command_runner.go:130] > }
	I1225 12:39:13.030558 1463142 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
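The empty `{"images": []}` answer above is what drives the "assuming images are not preloaded" decision: the JSON from `crictl images --output json` is decoded and checked for the expected kube-apiserver tag. A minimal Go sketch of that check, with the struct shape following the JSON printed in this log rather than minikube's own crio.go types.

    // imagescheck.go - decodes `crictl images --output json` output (shape as
    // printed in the log) and reports whether a required repoTag is present.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type criImages struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(raw []byte, tag string) (bool, error) {
    	var out criImages
    	if err := json.Unmarshal(raw, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	raw := []byte(`{"images": []}`)
    	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.4")
    	fmt.Println(ok, err) // false <nil> -> the preload still has to be copied over
    }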
	I1225 12:39:13.030623 1463142 ssh_runner.go:195] Run: which lz4
	I1225 12:39:13.034279 1463142 command_runner.go:130] > /usr/bin/lz4
	I1225 12:39:13.034431 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1225 12:39:13.034547 1463142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 12:39:13.038525 1463142 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:39:13.038751 1463142 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:39:13.038780 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 12:39:14.900003 1463142 crio.go:444] Took 1.865489 seconds to copy over tarball
	I1225 12:39:14.900080 1463142 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 12:39:17.747004 1463142 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.846891384s)
	I1225 12:39:17.747044 1463142 crio.go:451] Took 2.847010 seconds to extract the tarball
	I1225 12:39:17.747057 1463142 ssh_runner.go:146] rm: /preloaded.tar.lz4
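After the preload is judged missing, the tarball is scp'd to the guest and unpacked with `tar -I lz4 -C /var -xf`, as timed in the two duration lines above. A short sketch of the equivalent extraction call; the path is taken from the log, but running it locally via exec is an assumption (minikube performs this on the guest over SSH, and it requires tar and lz4 to be present).

    // preload.go - sketch of the extraction step above: unpack an lz4-compressed
    // preload tarball into /var so the CRI-O image store is pre-populated.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func extractPreload(tarball string) error {
    	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }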
	I1225 12:39:17.788572 1463142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:39:17.862374 1463142 command_runner.go:130] > {
	I1225 12:39:17.862408 1463142 command_runner.go:130] >   "images": [
	I1225 12:39:17.862415 1463142 command_runner.go:130] >     {
	I1225 12:39:17.862429 1463142 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1225 12:39:17.862453 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.862464 1463142 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1225 12:39:17.862474 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862481 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.862500 1463142 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1225 12:39:17.862515 1463142 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1225 12:39:17.862525 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862536 1463142 command_runner.go:130] >       "size": "65258016",
	I1225 12:39:17.862546 1463142 command_runner.go:130] >       "uid": null,
	I1225 12:39:17.862563 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.862580 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.862590 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.862600 1463142 command_runner.go:130] >     },
	I1225 12:39:17.862610 1463142 command_runner.go:130] >     {
	I1225 12:39:17.862623 1463142 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1225 12:39:17.862633 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.862645 1463142 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1225 12:39:17.862667 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862678 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.862694 1463142 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1225 12:39:17.862710 1463142 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1225 12:39:17.862719 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862733 1463142 command_runner.go:130] >       "size": "31470524",
	I1225 12:39:17.862742 1463142 command_runner.go:130] >       "uid": null,
	I1225 12:39:17.862752 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.862761 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.862770 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.862781 1463142 command_runner.go:130] >     },
	I1225 12:39:17.862789 1463142 command_runner.go:130] >     {
	I1225 12:39:17.862797 1463142 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1225 12:39:17.862807 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.862818 1463142 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1225 12:39:17.862827 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862836 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.862853 1463142 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1225 12:39:17.862868 1463142 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1225 12:39:17.862877 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.862923 1463142 command_runner.go:130] >       "size": "53621675",
	I1225 12:39:17.862958 1463142 command_runner.go:130] >       "uid": null,
	I1225 12:39:17.862965 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.862975 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.862982 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.862990 1463142 command_runner.go:130] >     },
	I1225 12:39:17.862999 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863012 1463142 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1225 12:39:17.863028 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863039 1463142 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1225 12:39:17.863048 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863059 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863075 1463142 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1225 12:39:17.863090 1463142 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1225 12:39:17.863108 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863118 1463142 command_runner.go:130] >       "size": "295456551",
	I1225 12:39:17.863127 1463142 command_runner.go:130] >       "uid": {
	I1225 12:39:17.863137 1463142 command_runner.go:130] >         "value": "0"
	I1225 12:39:17.863146 1463142 command_runner.go:130] >       },
	I1225 12:39:17.863155 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.863168 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.863178 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.863187 1463142 command_runner.go:130] >     },
	I1225 12:39:17.863196 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863207 1463142 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1225 12:39:17.863217 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863231 1463142 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1225 12:39:17.863241 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863251 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863267 1463142 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1225 12:39:17.863283 1463142 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1225 12:39:17.863292 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863299 1463142 command_runner.go:130] >       "size": "127226832",
	I1225 12:39:17.863309 1463142 command_runner.go:130] >       "uid": {
	I1225 12:39:17.863319 1463142 command_runner.go:130] >         "value": "0"
	I1225 12:39:17.863329 1463142 command_runner.go:130] >       },
	I1225 12:39:17.863339 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.863348 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.863358 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.863367 1463142 command_runner.go:130] >     },
	I1225 12:39:17.863373 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863386 1463142 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1225 12:39:17.863395 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863406 1463142 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1225 12:39:17.863419 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863429 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863444 1463142 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1225 12:39:17.863459 1463142 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1225 12:39:17.863468 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863475 1463142 command_runner.go:130] >       "size": "123261750",
	I1225 12:39:17.863484 1463142 command_runner.go:130] >       "uid": {
	I1225 12:39:17.863493 1463142 command_runner.go:130] >         "value": "0"
	I1225 12:39:17.863502 1463142 command_runner.go:130] >       },
	I1225 12:39:17.863511 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.863520 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.863530 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.863538 1463142 command_runner.go:130] >     },
	I1225 12:39:17.863547 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863561 1463142 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1225 12:39:17.863570 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863581 1463142 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1225 12:39:17.863590 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863604 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863620 1463142 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1225 12:39:17.863641 1463142 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1225 12:39:17.863656 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863666 1463142 command_runner.go:130] >       "size": "74749335",
	I1225 12:39:17.863675 1463142 command_runner.go:130] >       "uid": null,
	I1225 12:39:17.863686 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.863696 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.863706 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.863714 1463142 command_runner.go:130] >     },
	I1225 12:39:17.863722 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863734 1463142 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1225 12:39:17.863744 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863751 1463142 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1225 12:39:17.863764 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863771 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863805 1463142 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1225 12:39:17.863818 1463142 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1225 12:39:17.863837 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863845 1463142 command_runner.go:130] >       "size": "61551410",
	I1225 12:39:17.863868 1463142 command_runner.go:130] >       "uid": {
	I1225 12:39:17.863878 1463142 command_runner.go:130] >         "value": "0"
	I1225 12:39:17.863884 1463142 command_runner.go:130] >       },
	I1225 12:39:17.863890 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.863898 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.863905 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.863914 1463142 command_runner.go:130] >     },
	I1225 12:39:17.863920 1463142 command_runner.go:130] >     {
	I1225 12:39:17.863930 1463142 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1225 12:39:17.863939 1463142 command_runner.go:130] >       "repoTags": [
	I1225 12:39:17.863948 1463142 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1225 12:39:17.863957 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.863964 1463142 command_runner.go:130] >       "repoDigests": [
	I1225 12:39:17.863978 1463142 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1225 12:39:17.863991 1463142 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1225 12:39:17.864000 1463142 command_runner.go:130] >       ],
	I1225 12:39:17.864014 1463142 command_runner.go:130] >       "size": "750414",
	I1225 12:39:17.864023 1463142 command_runner.go:130] >       "uid": {
	I1225 12:39:17.864033 1463142 command_runner.go:130] >         "value": "65535"
	I1225 12:39:17.864041 1463142 command_runner.go:130] >       },
	I1225 12:39:17.864051 1463142 command_runner.go:130] >       "username": "",
	I1225 12:39:17.864061 1463142 command_runner.go:130] >       "spec": null,
	I1225 12:39:17.864071 1463142 command_runner.go:130] >       "pinned": false
	I1225 12:39:17.864076 1463142 command_runner.go:130] >     }
	I1225 12:39:17.864085 1463142 command_runner.go:130] >   ]
	I1225 12:39:17.864092 1463142 command_runner.go:130] > }
	I1225 12:39:17.864260 1463142 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 12:39:17.864278 1463142 cache_images.go:84] Images are preloaded, skipping loading
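The JSON dump above is the image list minikube reads back from the node's container runtime before concluding that the preload already contains every image it needs. As a rough aid for inspecting such a dump offline, here is a minimal Go sketch that decodes it; the top-level "images" key and the field names are assumptions taken from the entries visible in the log rather than types copied from minikube or CRI-O, and images.json is a hypothetical file name.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // imageList mirrors the shape visible in the log output above. The
    // "images" key and these field names are assumed from that output,
    // not copied from minikube or CRI-O source.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Username    string   `json:"username"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	// Feed it a saved copy of the JSON from the log, for example:
    	//   go run . < images.json   (images.json is a placeholder name)
    	var list imageList
    	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	for _, img := range list.Images {
    		if len(img.RepoTags) > 0 {
    			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
    		}
    	}
    }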
	I1225 12:39:17.864356 1463142 ssh_runner.go:195] Run: crio config
	I1225 12:39:17.918295 1463142 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1225 12:39:17.918324 1463142 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1225 12:39:17.918331 1463142 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1225 12:39:17.918334 1463142 command_runner.go:130] > #
	I1225 12:39:17.918341 1463142 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1225 12:39:17.918347 1463142 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1225 12:39:17.918353 1463142 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1225 12:39:17.918359 1463142 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1225 12:39:17.918363 1463142 command_runner.go:130] > # reload'.
	I1225 12:39:17.918369 1463142 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1225 12:39:17.918390 1463142 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1225 12:39:17.918400 1463142 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1225 12:39:17.918413 1463142 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1225 12:39:17.918419 1463142 command_runner.go:130] > [crio]
	I1225 12:39:17.918429 1463142 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1225 12:39:17.918455 1463142 command_runner.go:130] > # containers images, in this directory.
	I1225 12:39:17.918470 1463142 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1225 12:39:17.918487 1463142 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1225 12:39:17.918629 1463142 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1225 12:39:17.918656 1463142 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1225 12:39:17.918668 1463142 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1225 12:39:17.918850 1463142 command_runner.go:130] > storage_driver = "overlay"
	I1225 12:39:17.918868 1463142 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1225 12:39:17.918880 1463142 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1225 12:39:17.918891 1463142 command_runner.go:130] > storage_option = [
	I1225 12:39:17.919081 1463142 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1225 12:39:17.919171 1463142 command_runner.go:130] > ]
	I1225 12:39:17.919189 1463142 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1225 12:39:17.919204 1463142 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1225 12:39:17.919510 1463142 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1225 12:39:17.919526 1463142 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1225 12:39:17.919536 1463142 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1225 12:39:17.919544 1463142 command_runner.go:130] > # always happen on a node reboot
	I1225 12:39:17.919808 1463142 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1225 12:39:17.919823 1463142 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1225 12:39:17.919834 1463142 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1225 12:39:17.919854 1463142 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1225 12:39:17.920334 1463142 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1225 12:39:17.920358 1463142 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1225 12:39:17.920372 1463142 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1225 12:39:17.920696 1463142 command_runner.go:130] > # internal_wipe = true
	I1225 12:39:17.920723 1463142 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1225 12:39:17.920734 1463142 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1225 12:39:17.920749 1463142 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1225 12:39:17.921217 1463142 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1225 12:39:17.921259 1463142 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1225 12:39:17.921268 1463142 command_runner.go:130] > [crio.api]
	I1225 12:39:17.921283 1463142 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1225 12:39:17.921421 1463142 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1225 12:39:17.921433 1463142 command_runner.go:130] > # IP address on which the stream server will listen.
	I1225 12:39:17.921662 1463142 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1225 12:39:17.921696 1463142 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1225 12:39:17.921711 1463142 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1225 12:39:17.922045 1463142 command_runner.go:130] > # stream_port = "0"
	I1225 12:39:17.922062 1463142 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1225 12:39:17.922251 1463142 command_runner.go:130] > # stream_enable_tls = false
	I1225 12:39:17.922267 1463142 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1225 12:39:17.922495 1463142 command_runner.go:130] > # stream_idle_timeout = ""
	I1225 12:39:17.922515 1463142 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1225 12:39:17.922529 1463142 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1225 12:39:17.922539 1463142 command_runner.go:130] > # minutes.
	I1225 12:39:17.922772 1463142 command_runner.go:130] > # stream_tls_cert = ""
	I1225 12:39:17.922788 1463142 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1225 12:39:17.922832 1463142 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1225 12:39:17.923178 1463142 command_runner.go:130] > # stream_tls_key = ""
	I1225 12:39:17.923200 1463142 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1225 12:39:17.923211 1463142 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1225 12:39:17.923220 1463142 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1225 12:39:17.923443 1463142 command_runner.go:130] > # stream_tls_ca = ""
	I1225 12:39:17.923459 1463142 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:39:17.923567 1463142 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1225 12:39:17.923585 1463142 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:39:17.923713 1463142 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1225 12:39:17.923752 1463142 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1225 12:39:17.923770 1463142 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1225 12:39:17.923777 1463142 command_runner.go:130] > [crio.runtime]
	I1225 12:39:17.923787 1463142 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1225 12:39:17.923793 1463142 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1225 12:39:17.923798 1463142 command_runner.go:130] > # "nofile=1024:2048"
	I1225 12:39:17.923808 1463142 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1225 12:39:17.924038 1463142 command_runner.go:130] > # default_ulimits = [
	I1225 12:39:17.924093 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.924113 1463142 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1225 12:39:17.924120 1463142 command_runner.go:130] > # no_pivot = false
	I1225 12:39:17.924133 1463142 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1225 12:39:17.924149 1463142 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1225 12:39:17.924161 1463142 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1225 12:39:17.924173 1463142 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1225 12:39:17.924184 1463142 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1225 12:39:17.924196 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:39:17.924218 1463142 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1225 12:39:17.924229 1463142 command_runner.go:130] > # Cgroup setting for conmon
	I1225 12:39:17.924241 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1225 12:39:17.924251 1463142 command_runner.go:130] > conmon_cgroup = "pod"
	I1225 12:39:17.924264 1463142 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1225 12:39:17.924276 1463142 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1225 12:39:17.924288 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:39:17.924297 1463142 command_runner.go:130] > conmon_env = [
	I1225 12:39:17.924308 1463142 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1225 12:39:17.924317 1463142 command_runner.go:130] > ]
	I1225 12:39:17.924327 1463142 command_runner.go:130] > # Additional environment variables to set for all the
	I1225 12:39:17.924338 1463142 command_runner.go:130] > # containers. These are overridden if set in the
	I1225 12:39:17.924350 1463142 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1225 12:39:17.924360 1463142 command_runner.go:130] > # default_env = [
	I1225 12:39:17.924365 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.924379 1463142 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1225 12:39:17.924389 1463142 command_runner.go:130] > # selinux = false
	I1225 12:39:17.924399 1463142 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1225 12:39:17.924420 1463142 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1225 12:39:17.924433 1463142 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1225 12:39:17.924442 1463142 command_runner.go:130] > # seccomp_profile = ""
	I1225 12:39:17.924452 1463142 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1225 12:39:17.924464 1463142 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1225 12:39:17.924474 1463142 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1225 12:39:17.924485 1463142 command_runner.go:130] > # which might increase security.
	I1225 12:39:17.924494 1463142 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1225 12:39:17.924508 1463142 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1225 12:39:17.924521 1463142 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1225 12:39:17.924534 1463142 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1225 12:39:17.924548 1463142 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1225 12:39:17.924559 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:39:17.924570 1463142 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1225 12:39:17.924579 1463142 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1225 12:39:17.924589 1463142 command_runner.go:130] > # the cgroup blockio controller.
	I1225 12:39:17.924600 1463142 command_runner.go:130] > # blockio_config_file = ""
	I1225 12:39:17.924611 1463142 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1225 12:39:17.924624 1463142 command_runner.go:130] > # irqbalance daemon.
	I1225 12:39:17.924636 1463142 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1225 12:39:17.924650 1463142 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1225 12:39:17.924662 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:39:17.924668 1463142 command_runner.go:130] > # rdt_config_file = ""
	I1225 12:39:17.924688 1463142 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1225 12:39:17.924698 1463142 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1225 12:39:17.924711 1463142 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1225 12:39:17.924721 1463142 command_runner.go:130] > # separate_pull_cgroup = ""
	I1225 12:39:17.924735 1463142 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1225 12:39:17.924748 1463142 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1225 12:39:17.924755 1463142 command_runner.go:130] > # will be added.
	I1225 12:39:17.924766 1463142 command_runner.go:130] > # default_capabilities = [
	I1225 12:39:17.924775 1463142 command_runner.go:130] > # 	"CHOWN",
	I1225 12:39:17.924782 1463142 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1225 12:39:17.924792 1463142 command_runner.go:130] > # 	"FSETID",
	I1225 12:39:17.924801 1463142 command_runner.go:130] > # 	"FOWNER",
	I1225 12:39:17.924808 1463142 command_runner.go:130] > # 	"SETGID",
	I1225 12:39:17.924822 1463142 command_runner.go:130] > # 	"SETUID",
	I1225 12:39:17.924832 1463142 command_runner.go:130] > # 	"SETPCAP",
	I1225 12:39:17.924838 1463142 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1225 12:39:17.924848 1463142 command_runner.go:130] > # 	"KILL",
	I1225 12:39:17.924854 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.924867 1463142 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1225 12:39:17.924883 1463142 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:39:17.924894 1463142 command_runner.go:130] > # default_sysctls = [
	I1225 12:39:17.924901 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.924909 1463142 command_runner.go:130] > # List of devices on the host that a
	I1225 12:39:17.924921 1463142 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1225 12:39:17.924929 1463142 command_runner.go:130] > # allowed_devices = [
	I1225 12:39:17.924937 1463142 command_runner.go:130] > # 	"/dev/fuse",
	I1225 12:39:17.924943 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.924955 1463142 command_runner.go:130] > # List of additional devices. specified as
	I1225 12:39:17.924970 1463142 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1225 12:39:17.924982 1463142 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1225 12:39:17.925028 1463142 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:39:17.925042 1463142 command_runner.go:130] > # additional_devices = [
	I1225 12:39:17.925048 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.925057 1463142 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1225 12:39:17.925066 1463142 command_runner.go:130] > # cdi_spec_dirs = [
	I1225 12:39:17.925073 1463142 command_runner.go:130] > # 	"/etc/cdi",
	I1225 12:39:17.925082 1463142 command_runner.go:130] > # 	"/var/run/cdi",
	I1225 12:39:17.925088 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.925099 1463142 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1225 12:39:17.925112 1463142 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1225 12:39:17.925121 1463142 command_runner.go:130] > # Defaults to false.
	I1225 12:39:17.925129 1463142 command_runner.go:130] > # device_ownership_from_security_context = false
	I1225 12:39:17.925143 1463142 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1225 12:39:17.925155 1463142 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1225 12:39:17.925162 1463142 command_runner.go:130] > # hooks_dir = [
	I1225 12:39:17.925172 1463142 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1225 12:39:17.925182 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.925191 1463142 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1225 12:39:17.925205 1463142 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1225 12:39:17.925222 1463142 command_runner.go:130] > # its default mounts from the following two files:
	I1225 12:39:17.925231 1463142 command_runner.go:130] > #
	I1225 12:39:17.925241 1463142 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1225 12:39:17.925254 1463142 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1225 12:39:17.925266 1463142 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1225 12:39:17.925275 1463142 command_runner.go:130] > #
	I1225 12:39:17.925285 1463142 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1225 12:39:17.925299 1463142 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1225 12:39:17.925312 1463142 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1225 12:39:17.925323 1463142 command_runner.go:130] > #      only add mounts it finds in this file.
	I1225 12:39:17.925333 1463142 command_runner.go:130] > #
	I1225 12:39:17.925344 1463142 command_runner.go:130] > # default_mounts_file = ""
	I1225 12:39:17.925356 1463142 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1225 12:39:17.925367 1463142 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1225 12:39:17.925377 1463142 command_runner.go:130] > pids_limit = 1024
	I1225 12:39:17.925391 1463142 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1225 12:39:17.925404 1463142 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1225 12:39:17.925415 1463142 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1225 12:39:17.925441 1463142 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1225 12:39:17.925451 1463142 command_runner.go:130] > # log_size_max = -1
	I1225 12:39:17.925463 1463142 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1225 12:39:17.925473 1463142 command_runner.go:130] > # log_to_journald = false
	I1225 12:39:17.925486 1463142 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1225 12:39:17.925497 1463142 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1225 12:39:17.925508 1463142 command_runner.go:130] > # Path to directory for container attach sockets.
	I1225 12:39:17.925520 1463142 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1225 12:39:17.925528 1463142 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1225 12:39:17.925539 1463142 command_runner.go:130] > # bind_mount_prefix = ""
	I1225 12:39:17.925547 1463142 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1225 12:39:17.925557 1463142 command_runner.go:130] > # read_only = false
	I1225 12:39:17.925567 1463142 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1225 12:39:17.925580 1463142 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1225 12:39:17.925591 1463142 command_runner.go:130] > # live configuration reload.
	I1225 12:39:17.925600 1463142 command_runner.go:130] > # log_level = "info"
	I1225 12:39:17.925610 1463142 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1225 12:39:17.925621 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:39:17.925633 1463142 command_runner.go:130] > # log_filter = ""
	I1225 12:39:17.925648 1463142 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1225 12:39:17.925665 1463142 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1225 12:39:17.925674 1463142 command_runner.go:130] > # separated by comma.
	I1225 12:39:17.925687 1463142 command_runner.go:130] > # uid_mappings = ""
	I1225 12:39:17.925700 1463142 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1225 12:39:17.925713 1463142 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1225 12:39:17.925720 1463142 command_runner.go:130] > # separated by comma.
	I1225 12:39:17.925729 1463142 command_runner.go:130] > # gid_mappings = ""
	I1225 12:39:17.925739 1463142 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1225 12:39:17.925752 1463142 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:39:17.925766 1463142 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:39:17.925776 1463142 command_runner.go:130] > # minimum_mappable_uid = -1
	I1225 12:39:17.925788 1463142 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1225 12:39:17.925801 1463142 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:39:17.925814 1463142 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:39:17.925824 1463142 command_runner.go:130] > # minimum_mappable_gid = -1
	I1225 12:39:17.925836 1463142 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1225 12:39:17.925852 1463142 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1225 12:39:17.925864 1463142 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1225 12:39:17.925874 1463142 command_runner.go:130] > # ctr_stop_timeout = 30
	I1225 12:39:17.925884 1463142 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1225 12:39:17.925896 1463142 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1225 12:39:17.925907 1463142 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1225 12:39:17.925918 1463142 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1225 12:39:17.925929 1463142 command_runner.go:130] > drop_infra_ctr = false
	I1225 12:39:17.925938 1463142 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1225 12:39:17.925950 1463142 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1225 12:39:17.925966 1463142 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1225 12:39:17.925975 1463142 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1225 12:39:17.925985 1463142 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1225 12:39:17.925996 1463142 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1225 12:39:17.926004 1463142 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1225 12:39:17.926019 1463142 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1225 12:39:17.926030 1463142 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1225 12:39:17.926044 1463142 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1225 12:39:17.926063 1463142 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1225 12:39:17.926076 1463142 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1225 12:39:17.926086 1463142 command_runner.go:130] > # default_runtime = "runc"
	I1225 12:39:17.926094 1463142 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1225 12:39:17.926110 1463142 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1225 12:39:17.926126 1463142 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1225 12:39:17.926135 1463142 command_runner.go:130] > # creation as a file is not desired either.
	I1225 12:39:17.926146 1463142 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1225 12:39:17.926156 1463142 command_runner.go:130] > # the hostname is being managed dynamically.
	I1225 12:39:17.926170 1463142 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1225 12:39:17.926178 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.926189 1463142 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1225 12:39:17.926202 1463142 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1225 12:39:17.926215 1463142 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1225 12:39:17.926228 1463142 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1225 12:39:17.926234 1463142 command_runner.go:130] > #
	I1225 12:39:17.926242 1463142 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1225 12:39:17.926253 1463142 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1225 12:39:17.926265 1463142 command_runner.go:130] > #  runtime_type = "oci"
	I1225 12:39:17.926281 1463142 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1225 12:39:17.926293 1463142 command_runner.go:130] > #  privileged_without_host_devices = false
	I1225 12:39:17.926302 1463142 command_runner.go:130] > #  allowed_annotations = []
	I1225 12:39:17.926308 1463142 command_runner.go:130] > # Where:
	I1225 12:39:17.926319 1463142 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1225 12:39:17.926325 1463142 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1225 12:39:17.926334 1463142 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1225 12:39:17.926340 1463142 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1225 12:39:17.926346 1463142 command_runner.go:130] > #   in $PATH.
	I1225 12:39:17.926352 1463142 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1225 12:39:17.926359 1463142 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1225 12:39:17.926364 1463142 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1225 12:39:17.926370 1463142 command_runner.go:130] > #   state.
	I1225 12:39:17.926376 1463142 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1225 12:39:17.926384 1463142 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1225 12:39:17.926390 1463142 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1225 12:39:17.926398 1463142 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1225 12:39:17.926407 1463142 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1225 12:39:17.926416 1463142 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1225 12:39:17.926420 1463142 command_runner.go:130] > #   The currently recognized values are:
	I1225 12:39:17.926429 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1225 12:39:17.926449 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1225 12:39:17.926468 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1225 12:39:17.926482 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1225 12:39:17.926498 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1225 12:39:17.926512 1463142 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1225 12:39:17.926521 1463142 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1225 12:39:17.926527 1463142 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1225 12:39:17.926535 1463142 command_runner.go:130] > #   should be moved to the container's cgroup
	I1225 12:39:17.926539 1463142 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1225 12:39:17.926545 1463142 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1225 12:39:17.926549 1463142 command_runner.go:130] > runtime_type = "oci"
	I1225 12:39:17.926554 1463142 command_runner.go:130] > runtime_root = "/run/runc"
	I1225 12:39:17.926558 1463142 command_runner.go:130] > runtime_config_path = ""
	I1225 12:39:17.926565 1463142 command_runner.go:130] > monitor_path = ""
	I1225 12:39:17.926571 1463142 command_runner.go:130] > monitor_cgroup = ""
	I1225 12:39:17.926578 1463142 command_runner.go:130] > monitor_exec_cgroup = ""
	I1225 12:39:17.926584 1463142 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1225 12:39:17.926590 1463142 command_runner.go:130] > # running containers
	I1225 12:39:17.926594 1463142 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1225 12:39:17.926601 1463142 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1225 12:39:17.926653 1463142 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1225 12:39:17.926663 1463142 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1225 12:39:17.926667 1463142 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1225 12:39:17.926672 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1225 12:39:17.926676 1463142 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1225 12:39:17.926686 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1225 12:39:17.926693 1463142 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1225 12:39:17.926697 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1225 12:39:17.926703 1463142 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1225 12:39:17.926711 1463142 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1225 12:39:17.926717 1463142 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1225 12:39:17.926730 1463142 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1225 12:39:17.926743 1463142 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1225 12:39:17.926750 1463142 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1225 12:39:17.926760 1463142 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1225 12:39:17.926775 1463142 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1225 12:39:17.926790 1463142 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1225 12:39:17.926804 1463142 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1225 12:39:17.926817 1463142 command_runner.go:130] > # Example:
	I1225 12:39:17.926831 1463142 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1225 12:39:17.926841 1463142 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1225 12:39:17.926853 1463142 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1225 12:39:17.926864 1463142 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1225 12:39:17.926873 1463142 command_runner.go:130] > # cpuset = 0
	I1225 12:39:17.926880 1463142 command_runner.go:130] > # cpushares = "0-1"
	I1225 12:39:17.926889 1463142 command_runner.go:130] > # Where:
	I1225 12:39:17.926896 1463142 command_runner.go:130] > # The workload name is workload-type.
	I1225 12:39:17.926911 1463142 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1225 12:39:17.926923 1463142 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1225 12:39:17.926932 1463142 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1225 12:39:17.926943 1463142 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1225 12:39:17.926951 1463142 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1225 12:39:17.926954 1463142 command_runner.go:130] > # 
	I1225 12:39:17.926961 1463142 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1225 12:39:17.926966 1463142 command_runner.go:130] > #
	I1225 12:39:17.926972 1463142 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1225 12:39:17.926978 1463142 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1225 12:39:17.926985 1463142 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1225 12:39:17.926993 1463142 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1225 12:39:17.926998 1463142 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1225 12:39:17.927004 1463142 command_runner.go:130] > [crio.image]
	I1225 12:39:17.927010 1463142 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1225 12:39:17.927015 1463142 command_runner.go:130] > # default_transport = "docker://"
	I1225 12:39:17.927021 1463142 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1225 12:39:17.927029 1463142 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:39:17.927034 1463142 command_runner.go:130] > # global_auth_file = ""
	I1225 12:39:17.927042 1463142 command_runner.go:130] > # The image used to instantiate infra containers.
	I1225 12:39:17.927047 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:39:17.927055 1463142 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1225 12:39:17.927064 1463142 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1225 12:39:17.927070 1463142 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:39:17.927075 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:39:17.927079 1463142 command_runner.go:130] > # pause_image_auth_file = ""
	I1225 12:39:17.927084 1463142 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1225 12:39:17.927090 1463142 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1225 12:39:17.927095 1463142 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1225 12:39:17.927101 1463142 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1225 12:39:17.927105 1463142 command_runner.go:130] > # pause_command = "/pause"
	I1225 12:39:17.927110 1463142 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1225 12:39:17.927116 1463142 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1225 12:39:17.927121 1463142 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1225 12:39:17.927127 1463142 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1225 12:39:17.927132 1463142 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1225 12:39:17.927135 1463142 command_runner.go:130] > # signature_policy = ""
	I1225 12:39:17.927141 1463142 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1225 12:39:17.927146 1463142 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1225 12:39:17.927152 1463142 command_runner.go:130] > # changing them here.
	I1225 12:39:17.927156 1463142 command_runner.go:130] > # insecure_registries = [
	I1225 12:39:17.927159 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.927165 1463142 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1225 12:39:17.927173 1463142 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1225 12:39:17.927177 1463142 command_runner.go:130] > # image_volumes = "mkdir"
	I1225 12:39:17.927181 1463142 command_runner.go:130] > # Temporary directory to use for storing big files
	I1225 12:39:17.927188 1463142 command_runner.go:130] > # big_files_temporary_dir = ""
	I1225 12:39:17.927194 1463142 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1225 12:39:17.927198 1463142 command_runner.go:130] > # CNI plugins.
	I1225 12:39:17.927204 1463142 command_runner.go:130] > [crio.network]
	I1225 12:39:17.927210 1463142 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1225 12:39:17.927217 1463142 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1225 12:39:17.927221 1463142 command_runner.go:130] > # cni_default_network = ""
	I1225 12:39:17.927232 1463142 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1225 12:39:17.927236 1463142 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1225 12:39:17.927244 1463142 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1225 12:39:17.927248 1463142 command_runner.go:130] > # plugin_dirs = [
	I1225 12:39:17.927254 1463142 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1225 12:39:17.927262 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.927272 1463142 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1225 12:39:17.927281 1463142 command_runner.go:130] > [crio.metrics]
	I1225 12:39:17.927289 1463142 command_runner.go:130] > # Globally enable or disable metrics support.
	I1225 12:39:17.927298 1463142 command_runner.go:130] > enable_metrics = true
	I1225 12:39:17.927306 1463142 command_runner.go:130] > # Specify enabled metrics collectors.
	I1225 12:39:17.927316 1463142 command_runner.go:130] > # Per default all metrics are enabled.
	I1225 12:39:17.927328 1463142 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1225 12:39:17.927342 1463142 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1225 12:39:17.927354 1463142 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1225 12:39:17.927363 1463142 command_runner.go:130] > # metrics_collectors = [
	I1225 12:39:17.927370 1463142 command_runner.go:130] > # 	"operations",
	I1225 12:39:17.927382 1463142 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1225 12:39:17.927392 1463142 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1225 12:39:17.927401 1463142 command_runner.go:130] > # 	"operations_errors",
	I1225 12:39:17.927410 1463142 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1225 12:39:17.927422 1463142 command_runner.go:130] > # 	"image_pulls_by_name",
	I1225 12:39:17.927437 1463142 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1225 12:39:17.927448 1463142 command_runner.go:130] > # 	"image_pulls_failures",
	I1225 12:39:17.927455 1463142 command_runner.go:130] > # 	"image_pulls_successes",
	I1225 12:39:17.927465 1463142 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1225 12:39:17.927472 1463142 command_runner.go:130] > # 	"image_layer_reuse",
	I1225 12:39:17.927482 1463142 command_runner.go:130] > # 	"containers_oom_total",
	I1225 12:39:17.927491 1463142 command_runner.go:130] > # 	"containers_oom",
	I1225 12:39:17.927498 1463142 command_runner.go:130] > # 	"processes_defunct",
	I1225 12:39:17.927505 1463142 command_runner.go:130] > # 	"operations_total",
	I1225 12:39:17.927513 1463142 command_runner.go:130] > # 	"operations_latency_seconds",
	I1225 12:39:17.927523 1463142 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1225 12:39:17.927533 1463142 command_runner.go:130] > # 	"operations_errors_total",
	I1225 12:39:17.927542 1463142 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1225 12:39:17.927550 1463142 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1225 12:39:17.927562 1463142 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1225 12:39:17.927571 1463142 command_runner.go:130] > # 	"image_pulls_success_total",
	I1225 12:39:17.927576 1463142 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1225 12:39:17.927581 1463142 command_runner.go:130] > # 	"containers_oom_count_total",
	I1225 12:39:17.927590 1463142 command_runner.go:130] > # ]
	I1225 12:39:17.927598 1463142 command_runner.go:130] > # The port on which the metrics server will listen.
	I1225 12:39:17.927602 1463142 command_runner.go:130] > # metrics_port = 9090
	I1225 12:39:17.927607 1463142 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1225 12:39:17.927617 1463142 command_runner.go:130] > # metrics_socket = ""
	I1225 12:39:17.927624 1463142 command_runner.go:130] > # The certificate for the secure metrics server.
	I1225 12:39:17.927635 1463142 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1225 12:39:17.927645 1463142 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1225 12:39:17.927653 1463142 command_runner.go:130] > # certificate on any modification event.
	I1225 12:39:17.927661 1463142 command_runner.go:130] > # metrics_cert = ""
	I1225 12:39:17.927672 1463142 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1225 12:39:17.927685 1463142 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1225 12:39:17.927695 1463142 command_runner.go:130] > # metrics_key = ""
	I1225 12:39:17.927703 1463142 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1225 12:39:17.927713 1463142 command_runner.go:130] > [crio.tracing]
	I1225 12:39:17.927722 1463142 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1225 12:39:17.927732 1463142 command_runner.go:130] > # enable_tracing = false
	I1225 12:39:17.927744 1463142 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1225 12:39:17.927759 1463142 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1225 12:39:17.927773 1463142 command_runner.go:130] > # Number of samples to collect per million spans.
	I1225 12:39:17.927785 1463142 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1225 12:39:17.927794 1463142 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1225 12:39:17.927803 1463142 command_runner.go:130] > [crio.stats]
	I1225 12:39:17.927813 1463142 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1225 12:39:17.927824 1463142 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1225 12:39:17.927833 1463142 command_runner.go:130] > # stats_collection_period = 0
	I1225 12:39:17.927882 1463142 command_runner.go:130] ! time="2023-12-25 12:39:17.906156401Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1225 12:39:17.927903 1463142 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
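	The commented-out values above are the defaults from CRI-O's generated crio.conf template; none of them are overridden in this run. Purely as an illustrative sketch (not taken from this node), enabling metrics and tracing would mean setting the corresponding keys in their TOML sections, roughly:

		[crio.metrics]
		enable_metrics = true
		metrics_port = 9090

		[crio.tracing]
		enable_tracing = true
		tracing_endpoint = "0.0.0.0:4317"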
	I1225 12:39:17.928011 1463142 cni.go:84] Creating CNI manager for ""
	I1225 12:39:17.928025 1463142 cni.go:136] 1 nodes found, recommending kindnet
	I1225 12:39:17.928050 1463142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:39:17.928070 1463142 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-544936 NodeName:multinode-544936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:39:17.928201 1463142 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-544936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 12:39:17.928281 1463142 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-544936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
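	Everything from the kubeadm InitConfiguration through the kubelet systemd drop-in above is generated by minikube and copied onto the node before kubeadm is invoked (the init command appears further below). This is not something the test runs, but the same generated config could be sanity-checked without modifying the node via kubeadm's dry-run mode, assuming the file has already been copied into place:

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run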
	I1225 12:39:17.928338 1463142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:39:17.938362 1463142 command_runner.go:130] > kubeadm
	I1225 12:39:17.938394 1463142 command_runner.go:130] > kubectl
	I1225 12:39:17.938398 1463142 command_runner.go:130] > kubelet
	I1225 12:39:17.938422 1463142 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:39:17.938501 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 12:39:17.947869 1463142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1225 12:39:17.964101 1463142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:39:17.980221 1463142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1225 12:39:17.996657 1463142 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I1225 12:39:18.001086 1463142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:39:18.013396 1463142 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936 for IP: 192.168.39.21
	I1225 12:39:18.013436 1463142 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.013624 1463142 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:39:18.013683 1463142 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:39:18.013739 1463142 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key
	I1225 12:39:18.013771 1463142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt with IP's: []
	I1225 12:39:18.216786 1463142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt ...
	I1225 12:39:18.216826 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt: {Name:mk1420525a660399c12989c468644764afef744e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.217031 1463142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key ...
	I1225 12:39:18.217050 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key: {Name:mk65303d67800d4baea802d8c47c561b433030db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.217161 1463142 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key.86be2464
	I1225 12:39:18.217183 1463142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt.86be2464 with IP's: [192.168.39.21 10.96.0.1 127.0.0.1 10.0.0.1]
	I1225 12:39:18.443815 1463142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt.86be2464 ...
	I1225 12:39:18.443869 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt.86be2464: {Name:mk9a56eb2cfef8af3d45297b79071b7d1a3bd9e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.444077 1463142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key.86be2464 ...
	I1225 12:39:18.444100 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key.86be2464: {Name:mk65fcafb5dfbc13663177fbbfc9a97d1261591e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.444226 1463142 certs.go:337] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt.86be2464 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt
	I1225 12:39:18.444362 1463142 certs.go:341] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key.86be2464 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key
	I1225 12:39:18.444453 1463142 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key
	I1225 12:39:18.444473 1463142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt with IP's: []
	I1225 12:39:18.632391 1463142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt ...
	I1225 12:39:18.632427 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt: {Name:mkfbaf0a93fe2975dbd8c90f55b6af862f41d336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.632630 1463142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key ...
	I1225 12:39:18.632656 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key: {Name:mk9a7f496353ebc0977f5cda9313cd32222a0453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:18.632764 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1225 12:39:18.632786 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1225 12:39:18.632796 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1225 12:39:18.632809 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1225 12:39:18.632827 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:39:18.632855 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:39:18.632874 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:39:18.632890 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:39:18.632962 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:39:18.633012 1463142 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:39:18.633034 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:39:18.633078 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:39:18.633117 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:39:18.633155 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:39:18.633212 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:39:18.633262 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:39:18.633284 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:39:18.633302 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:39:18.634027 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 12:39:18.661615 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 12:39:18.685478 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 12:39:18.708939 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 12:39:18.733786 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:39:18.757525 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:39:18.782705 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:39:18.806564 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:39:18.830321 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:39:18.854043 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:39:18.878371 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:39:18.902700 1463142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 12:39:18.920177 1463142 ssh_runner.go:195] Run: openssl version
	I1225 12:39:18.926392 1463142 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1225 12:39:18.926500 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:39:18.937891 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:39:18.942849 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:39:18.942932 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:39:18.943018 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:39:18.948637 1463142 command_runner.go:130] > 3ec20f2e
	I1225 12:39:18.949015 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 12:39:18.960559 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:39:18.971946 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:39:18.976784 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:39:18.977129 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:39:18.977201 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:39:18.982656 1463142 command_runner.go:130] > b5213941
	I1225 12:39:18.982960 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:39:18.994192 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:39:19.005566 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:39:19.010518 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:39:19.010745 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:39:19.010897 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:39:19.016581 1463142 command_runner.go:130] > 51391683
	I1225 12:39:19.016672 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
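	The repeated pattern above is how minikube installs each of its CA certificates into the host trust store: link the PEM under /etc/ssl/certs, compute its OpenSSL subject hash, then create the <hash>.0 symlink that OpenSSL's lookup expects. A minimal sketch of the same scheme, using cert.pem as a placeholder file name rather than one from this run:

		sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
		sudo ln -fs /etc/ssl/certs/cert.pem /etc/ssl/certs/${hash}.0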
	I1225 12:39:19.027800 1463142 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:39:19.032138 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:39:19.032361 1463142 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:39:19.032425 1463142 kubeadm.go:404] StartCluster: {Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:39:19.032505 1463142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 12:39:19.032592 1463142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 12:39:19.079106 1463142 cri.go:89] found id: ""
	I1225 12:39:19.079206 1463142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 12:39:19.089576 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1225 12:39:19.089604 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1225 12:39:19.089621 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1225 12:39:19.089796 1463142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 12:39:19.100341 1463142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 12:39:19.110238 1463142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1225 12:39:19.110275 1463142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1225 12:39:19.110303 1463142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1225 12:39:19.110316 1463142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:39:19.110503 1463142 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:39:19.110547 1463142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 12:39:19.475844 1463142 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 12:39:19.475873 1463142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 12:39:31.267573 1463142 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 12:39:31.267619 1463142 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1225 12:39:31.267663 1463142 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 12:39:31.267685 1463142 command_runner.go:130] > [preflight] Running pre-flight checks
	I1225 12:39:31.267805 1463142 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 12:39:31.267818 1463142 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 12:39:31.267916 1463142 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 12:39:31.267943 1463142 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 12:39:31.268072 1463142 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 12:39:31.268084 1463142 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 12:39:31.268170 1463142 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 12:39:31.269687 1463142 out.go:204]   - Generating certificates and keys ...
	I1225 12:39:31.268205 1463142 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 12:39:31.269775 1463142 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 12:39:31.269788 1463142 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1225 12:39:31.269859 1463142 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 12:39:31.269869 1463142 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1225 12:39:31.269951 1463142 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 12:39:31.269961 1463142 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 12:39:31.270034 1463142 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1225 12:39:31.270043 1463142 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1225 12:39:31.270123 1463142 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1225 12:39:31.270132 1463142 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1225 12:39:31.270206 1463142 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1225 12:39:31.270215 1463142 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1225 12:39:31.270291 1463142 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1225 12:39:31.270302 1463142 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1225 12:39:31.270457 1463142 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-544936] and IPs [192.168.39.21 127.0.0.1 ::1]
	I1225 12:39:31.270469 1463142 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-544936] and IPs [192.168.39.21 127.0.0.1 ::1]
	I1225 12:39:31.270540 1463142 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1225 12:39:31.270550 1463142 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1225 12:39:31.270692 1463142 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-544936] and IPs [192.168.39.21 127.0.0.1 ::1]
	I1225 12:39:31.270701 1463142 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-544936] and IPs [192.168.39.21 127.0.0.1 ::1]
	I1225 12:39:31.270818 1463142 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 12:39:31.270833 1463142 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 12:39:31.270921 1463142 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 12:39:31.270931 1463142 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 12:39:31.270995 1463142 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1225 12:39:31.271005 1463142 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1225 12:39:31.271079 1463142 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 12:39:31.271088 1463142 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 12:39:31.271155 1463142 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 12:39:31.271166 1463142 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 12:39:31.271238 1463142 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 12:39:31.271252 1463142 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 12:39:31.271335 1463142 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 12:39:31.271344 1463142 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 12:39:31.271437 1463142 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 12:39:31.271463 1463142 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 12:39:31.271612 1463142 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 12:39:31.271634 1463142 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 12:39:31.271706 1463142 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 12:39:31.271722 1463142 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 12:39:31.273865 1463142 out.go:204]   - Booting up control plane ...
	I1225 12:39:31.273961 1463142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 12:39:31.273971 1463142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 12:39:31.274059 1463142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 12:39:31.274072 1463142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 12:39:31.274155 1463142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 12:39:31.274164 1463142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 12:39:31.274277 1463142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:39:31.274285 1463142 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:39:31.274384 1463142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:39:31.274391 1463142 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:39:31.274446 1463142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1225 12:39:31.274453 1463142 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 12:39:31.274654 1463142 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 12:39:31.274673 1463142 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 12:39:31.274768 1463142 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.504392 seconds
	I1225 12:39:31.274777 1463142 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504392 seconds
	I1225 12:39:31.274931 1463142 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 12:39:31.274952 1463142 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 12:39:31.275105 1463142 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 12:39:31.275113 1463142 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 12:39:31.275182 1463142 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1225 12:39:31.275190 1463142 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 12:39:31.275388 1463142 command_runner.go:130] > [mark-control-plane] Marking the node multinode-544936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 12:39:31.275397 1463142 kubeadm.go:322] [mark-control-plane] Marking the node multinode-544936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 12:39:31.275441 1463142 command_runner.go:130] > [bootstrap-token] Using token: qnyjjy.jtziixmcp1szre1o
	I1225 12:39:31.275447 1463142 kubeadm.go:322] [bootstrap-token] Using token: qnyjjy.jtziixmcp1szre1o
	I1225 12:39:31.277052 1463142 out.go:204]   - Configuring RBAC rules ...
	I1225 12:39:31.277172 1463142 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 12:39:31.277197 1463142 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 12:39:31.277302 1463142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 12:39:31.277311 1463142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 12:39:31.277461 1463142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 12:39:31.277470 1463142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 12:39:31.277598 1463142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 12:39:31.277607 1463142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 12:39:31.277718 1463142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 12:39:31.277725 1463142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 12:39:31.277793 1463142 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 12:39:31.277799 1463142 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 12:39:31.277893 1463142 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 12:39:31.277901 1463142 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 12:39:31.277947 1463142 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1225 12:39:31.277953 1463142 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 12:39:31.277989 1463142 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1225 12:39:31.277995 1463142 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 12:39:31.277998 1463142 kubeadm.go:322] 
	I1225 12:39:31.278050 1463142 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1225 12:39:31.278055 1463142 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 12:39:31.278059 1463142 kubeadm.go:322] 
	I1225 12:39:31.278119 1463142 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1225 12:39:31.278124 1463142 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 12:39:31.278127 1463142 kubeadm.go:322] 
	I1225 12:39:31.278148 1463142 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1225 12:39:31.278155 1463142 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 12:39:31.278202 1463142 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 12:39:31.278207 1463142 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 12:39:31.278251 1463142 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 12:39:31.278257 1463142 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 12:39:31.278260 1463142 kubeadm.go:322] 
	I1225 12:39:31.278307 1463142 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1225 12:39:31.278312 1463142 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 12:39:31.278316 1463142 kubeadm.go:322] 
	I1225 12:39:31.278364 1463142 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 12:39:31.278369 1463142 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 12:39:31.278373 1463142 kubeadm.go:322] 
	I1225 12:39:31.278414 1463142 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1225 12:39:31.278419 1463142 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 12:39:31.278523 1463142 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 12:39:31.278535 1463142 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 12:39:31.278650 1463142 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 12:39:31.278669 1463142 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 12:39:31.278677 1463142 kubeadm.go:322] 
	I1225 12:39:31.278799 1463142 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1225 12:39:31.278809 1463142 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 12:39:31.278889 1463142 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1225 12:39:31.278897 1463142 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 12:39:31.278901 1463142 kubeadm.go:322] 
	I1225 12:39:31.278974 1463142 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token qnyjjy.jtziixmcp1szre1o \
	I1225 12:39:31.278984 1463142 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qnyjjy.jtziixmcp1szre1o \
	I1225 12:39:31.279066 1463142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 12:39:31.279072 1463142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 12:39:31.279088 1463142 command_runner.go:130] > 	--control-plane 
	I1225 12:39:31.279094 1463142 kubeadm.go:322] 	--control-plane 
	I1225 12:39:31.279101 1463142 kubeadm.go:322] 
	I1225 12:39:31.279168 1463142 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1225 12:39:31.279175 1463142 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 12:39:31.279178 1463142 kubeadm.go:322] 
	I1225 12:39:31.279245 1463142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qnyjjy.jtziixmcp1szre1o \
	I1225 12:39:31.279258 1463142 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qnyjjy.jtziixmcp1szre1o \
	I1225 12:39:31.279378 1463142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 12:39:31.279405 1463142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
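	The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. Should it ever need to be recomputed by hand, the standard openssl pipeline from the kubeadm documentation applies; pointing it at minikube's CA path (as used elsewhere in this log) is an assumption, not something this test does:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'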
	I1225 12:39:31.279420 1463142 cni.go:84] Creating CNI manager for ""
	I1225 12:39:31.279432 1463142 cni.go:136] 1 nodes found, recommending kindnet
	I1225 12:39:31.280999 1463142 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1225 12:39:31.282207 1463142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 12:39:31.308342 1463142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1225 12:39:31.308392 1463142 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1225 12:39:31.308414 1463142 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1225 12:39:31.308423 1463142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:39:31.308439 1463142 command_runner.go:130] > Access: 2023-12-25 12:39:00.887097634 +0000
	I1225 12:39:31.308452 1463142 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1225 12:39:31.308465 1463142 command_runner.go:130] > Change: 2023-12-25 12:38:59.067097634 +0000
	I1225 12:39:31.308473 1463142 command_runner.go:130] >  Birth: -
	I1225 12:39:31.309265 1463142 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1225 12:39:31.309281 1463142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1225 12:39:31.359182 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 12:39:32.295560 1463142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1225 12:39:32.301251 1463142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1225 12:39:32.308946 1463142 command_runner.go:130] > serviceaccount/kindnet created
	I1225 12:39:32.322119 1463142 command_runner.go:130] > daemonset.apps/kindnet created
	I1225 12:39:32.324715 1463142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 12:39:32.324805 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=multinode-544936 minikube.k8s.io/updated_at=2023_12_25T12_39_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:32.324818 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:32.344415 1463142 command_runner.go:130] > -16
	I1225 12:39:32.344752 1463142 ops.go:34] apiserver oom_adj: -16
	I1225 12:39:32.546215 1463142 command_runner.go:130] > node/multinode-544936 labeled
	I1225 12:39:32.548042 1463142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1225 12:39:32.548200 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:32.637961 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:33.049117 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:33.130494 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:33.549201 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:33.634776 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:34.048412 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:34.137988 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:34.548639 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:34.640045 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:35.048910 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:35.144123 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:35.548398 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:35.641063 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:36.048433 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:36.145065 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:36.548694 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:36.634240 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:37.048561 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:37.130537 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:37.548614 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:37.634083 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:38.048402 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:38.136087 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:38.548256 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:38.630634 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:39.048861 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:39.140435 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:39.549112 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:39.636358 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:40.049297 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:40.135216 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:40.548633 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:40.640359 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:41.048307 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:41.150399 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:41.549295 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:41.643797 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:42.048908 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:42.155758 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:42.548765 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:42.651843 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:43.049075 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:43.160183 1463142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1225 12:39:43.548868 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:39:43.661723 1463142 command_runner.go:130] > NAME      SECRETS   AGE
	I1225 12:39:43.661757 1463142 command_runner.go:130] > default   0         0s
	I1225 12:39:43.661786 1463142 kubeadm.go:1088] duration metric: took 11.337060203s to wait for elevateKubeSystemPrivileges.
	I1225 12:39:43.661807 1463142 kubeadm.go:406] StartCluster complete in 24.629387935s
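	The string of 'serviceaccounts "default" not found' errors above is expected rather than a failure: minikube simply polls until kube-controller-manager has created the default service account, and that polling accounts for most of the 11.3s recorded for elevateKubeSystemPrivileges. A rough stand-alone equivalent of the wait, assuming kubectl is already pointed at the new cluster (the test instead runs kubectl on the node over SSH):

		until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done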
	I1225 12:39:43.661830 1463142 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:43.661925 1463142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:39:43.662917 1463142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:39:43.663215 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 12:39:43.663389 1463142 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 12:39:43.663968 1463142 addons.go:69] Setting storage-provisioner=true in profile "multinode-544936"
	I1225 12:39:43.663986 1463142 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:39:43.664074 1463142 addons.go:69] Setting default-storageclass=true in profile "multinode-544936"
	I1225 12:39:43.664100 1463142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-544936"
	I1225 12:39:43.664237 1463142 addons.go:237] Setting addon storage-provisioner=true in "multinode-544936"
	I1225 12:39:43.664347 1463142 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:39:43.664675 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:43.664711 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:43.664867 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:43.664917 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:43.666565 1463142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:39:43.666933 1463142 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
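The rest.Config dump above is the client configuration minikube derives from the kubeconfig it just wrote (API server at 192.168.39.21:8443, client cert/key and CA from the profile directory). A minimal sketch of building an equivalent client from a kubeconfig path with client-go's clientcmd; the function name and path argument are illustrative.

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset loads a kubeconfig file and returns a typed clientset.
// The resulting rest.Config carries the host, client cert/key and CA paths
// comparable to the dump in the log above.
func newClientset(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}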
	I1225 12:39:43.667817 1463142 cert_rotation.go:137] Starting client certificate rotation controller
	I1225 12:39:43.668321 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:39:43.668344 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:43.668356 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:43.668366 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:43.681514 1463142 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1225 12:39:43.681546 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:43.681558 1463142 round_trippers.go:580]     Audit-Id: 2eda1dc5-9a8d-486c-9e4f-b8c15fde1d72
	I1225 12:39:43.681567 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:43.681574 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:43.681590 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:43.681602 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:43.681611 1463142 round_trippers.go:580]     Content-Length: 291
	I1225 12:39:43.681622 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:43 GMT
	I1225 12:39:43.681660 1463142 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"263","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1225 12:39:43.682145 1463142 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"263","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1225 12:39:43.682240 1463142 round_trippers.go:463] PUT https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:39:43.682262 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:43.682273 1463142 round_trippers.go:473]     Content-Type: application/json
	I1225 12:39:43.682286 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:43.682295 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:43.682728 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36329
	I1225 12:39:43.682783 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1225 12:39:43.683244 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:43.683273 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:43.683777 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:43.683796 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:43.683795 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:43.683813 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:43.684168 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:43.684221 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:43.684335 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:39:43.684811 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:43.684887 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:43.686839 1463142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:39:43.687209 1463142 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:39:43.687566 1463142 addons.go:237] Setting addon default-storageclass=true in "multinode-544936"
	I1225 12:39:43.687611 1463142 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:39:43.688051 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:43.688101 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:43.694084 1463142 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1225 12:39:43.694116 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:43.694127 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:43.694136 1463142 round_trippers.go:580]     Content-Length: 291
	I1225 12:39:43.694144 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:43 GMT
	I1225 12:39:43.694152 1463142 round_trippers.go:580]     Audit-Id: 548d4f42-3eab-4a3b-971c-035d54d4dce5
	I1225 12:39:43.694159 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:43.694166 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:43.694199 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:43.694247 1463142 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"349","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1225 12:39:43.701678 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I1225 12:39:43.702179 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:43.702758 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:43.702790 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:43.703191 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:43.703425 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:39:43.704563 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I1225 12:39:43.705004 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:43.705435 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:43.705530 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:43.705555 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:43.707645 1463142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 12:39:43.705886 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:43.709167 1463142 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:39:43.709190 1463142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 12:39:43.709223 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:43.709709 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:43.709772 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:43.712448 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:43.712894 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:43.712918 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:43.713124 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:43.713336 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:43.713471 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:43.713611 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:43.725877 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I1225 12:39:43.726419 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:43.727029 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:43.727059 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:43.727431 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:43.727639 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:39:43.729123 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:39:43.729461 1463142 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 12:39:43.729483 1463142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 12:39:43.729504 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:39:43.732037 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:43.732496 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:39:43.732529 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:39:43.732672 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:39:43.732871 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:39:43.733042 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:39:43.733187 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:39:43.842643 1463142 command_runner.go:130] > apiVersion: v1
	I1225 12:39:43.842667 1463142 command_runner.go:130] > data:
	I1225 12:39:43.842671 1463142 command_runner.go:130] >   Corefile: |
	I1225 12:39:43.842683 1463142 command_runner.go:130] >     .:53 {
	I1225 12:39:43.842687 1463142 command_runner.go:130] >         errors
	I1225 12:39:43.842696 1463142 command_runner.go:130] >         health {
	I1225 12:39:43.842701 1463142 command_runner.go:130] >            lameduck 5s
	I1225 12:39:43.842705 1463142 command_runner.go:130] >         }
	I1225 12:39:43.842708 1463142 command_runner.go:130] >         ready
	I1225 12:39:43.842731 1463142 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1225 12:39:43.842739 1463142 command_runner.go:130] >            pods insecure
	I1225 12:39:43.842744 1463142 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1225 12:39:43.842751 1463142 command_runner.go:130] >            ttl 30
	I1225 12:39:43.842754 1463142 command_runner.go:130] >         }
	I1225 12:39:43.842762 1463142 command_runner.go:130] >         prometheus :9153
	I1225 12:39:43.842766 1463142 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1225 12:39:43.842773 1463142 command_runner.go:130] >            max_concurrent 1000
	I1225 12:39:43.842777 1463142 command_runner.go:130] >         }
	I1225 12:39:43.842781 1463142 command_runner.go:130] >         cache 30
	I1225 12:39:43.842787 1463142 command_runner.go:130] >         loop
	I1225 12:39:43.842792 1463142 command_runner.go:130] >         reload
	I1225 12:39:43.842797 1463142 command_runner.go:130] >         loadbalance
	I1225 12:39:43.842801 1463142 command_runner.go:130] >     }
	I1225 12:39:43.842805 1463142 command_runner.go:130] > kind: ConfigMap
	I1225 12:39:43.842810 1463142 command_runner.go:130] > metadata:
	I1225 12:39:43.842819 1463142 command_runner.go:130] >   creationTimestamp: "2023-12-25T12:39:31Z"
	I1225 12:39:43.842828 1463142 command_runner.go:130] >   name: coredns
	I1225 12:39:43.842835 1463142 command_runner.go:130] >   namespace: kube-system
	I1225 12:39:43.842846 1463142 command_runner.go:130] >   resourceVersion: "259"
	I1225 12:39:43.842858 1463142 command_runner.go:130] >   uid: 1c94dbaf-9e87-4c5a-a00d-da7d7c13d59d
	I1225 12:39:43.843106 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 12:39:43.915687 1463142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 12:39:43.915869 1463142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 12:39:44.168837 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:39:44.168871 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:44.168884 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:44.168893 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:44.222939 1463142 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I1225 12:39:44.222974 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:44.222984 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:44.222993 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:44.223000 1463142 round_trippers.go:580]     Content-Length: 291
	I1225 12:39:44.223007 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:44 GMT
	I1225 12:39:44.223015 1463142 round_trippers.go:580]     Audit-Id: c67909bf-8e63-4838-bc99-2f1cb21141d3
	I1225 12:39:44.223022 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:44.223029 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:44.227155 1463142 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"369","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1225 12:39:44.227529 1463142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-544936" context rescaled to 1 replicas
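The GET followed by PUT against .../deployments/coredns/scale above is the coredns Deployment being rescaled from 2 replicas to 1 through the autoscaling/v1 Scale subresource. A minimal client-go sketch of the same operation; the helper name and the clientset are assumptions, the API calls are the standard Scale-subresource accessors.

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the replica count of the kube-system/coredns Deployment
// via the Scale subresource, mirroring the GET then PUT seen in the log.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}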
	I1225 12:39:44.227615 1463142 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:39:44.229528 1463142 out.go:177] * Verifying Kubernetes components...
	I1225 12:39:44.230938 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:39:44.627836 1463142 command_runner.go:130] > configmap/coredns replaced
	I1225 12:39:44.634289 1463142 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
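The shell pipeline run at 12:39:43.843 rewrites the coredns ConfigMap in place, inserting a hosts stanza that maps host.minikube.internal to the host gateway (192.168.39.1 here) ahead of the forward plugin. A rough client-go sketch of the same edit done programmatically; the helper name is mine, and the string surgery is simplified on the assumption that the Corefile matches the dump shown earlier in the log.

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord adds a "hosts" block for host.minikube.internal to the
// coredns Corefile, immediately before the "forward . /etc/resolv.conf" line.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}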
	I1225 12:39:44.748436 1463142 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1225 12:39:44.755991 1463142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1225 12:39:44.770495 1463142 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1225 12:39:44.781205 1463142 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1225 12:39:44.789233 1463142 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1225 12:39:44.804419 1463142 command_runner.go:130] > pod/storage-provisioner created
	I1225 12:39:44.807241 1463142 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1225 12:39:44.807345 1463142 main.go:141] libmachine: Making call to close driver server
	I1225 12:39:44.807364 1463142 main.go:141] libmachine: (multinode-544936) Calling .Close
	I1225 12:39:44.807404 1463142 main.go:141] libmachine: Making call to close driver server
	I1225 12:39:44.807428 1463142 main.go:141] libmachine: (multinode-544936) Calling .Close
	I1225 12:39:44.807705 1463142 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:39:44.807725 1463142 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:39:44.807730 1463142 main.go:141] libmachine: (multinode-544936) DBG | Closing plugin on server side
	I1225 12:39:44.807734 1463142 main.go:141] libmachine: Making call to close driver server
	I1225 12:39:44.807749 1463142 main.go:141] libmachine: (multinode-544936) Calling .Close
	I1225 12:39:44.807840 1463142 main.go:141] libmachine: (multinode-544936) DBG | Closing plugin on server side
	I1225 12:39:44.807878 1463142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:39:44.807883 1463142 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:39:44.808033 1463142 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:39:44.808078 1463142 main.go:141] libmachine: Making call to close driver server
	I1225 12:39:44.808104 1463142 main.go:141] libmachine: (multinode-544936) Calling .Close
	I1225 12:39:44.808049 1463142 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:39:44.808140 1463142 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:39:44.808115 1463142 main.go:141] libmachine: (multinode-544936) DBG | Closing plugin on server side
	I1225 12:39:44.808250 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/apis/storage.k8s.io/v1/storageclasses
	I1225 12:39:44.808262 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:44.808273 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:44.808285 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:44.808426 1463142 main.go:141] libmachine: (multinode-544936) DBG | Closing plugin on server side
	I1225 12:39:44.808300 1463142 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:39:44.808472 1463142 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:39:44.808579 1463142 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:39:44.808761 1463142 node_ready.go:35] waiting up to 6m0s for node "multinode-544936" to be "Ready" ...
	I1225 12:39:44.808862 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:44.808873 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:44.808884 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:44.808895 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:44.812852 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:39:44.812871 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:44.812881 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:44.812891 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:44.812899 1463142 round_trippers.go:580]     Content-Length: 1273
	I1225 12:39:44.812906 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:44 GMT
	I1225 12:39:44.812914 1463142 round_trippers.go:580]     Audit-Id: 6d3f1ed7-37e5-4c54-bb48-aa67cb50b2ba
	I1225 12:39:44.812923 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:44.812936 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:44.812968 1463142 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"635d010e-faca-419b-bb39-e4491fddb4d2","resourceVersion":"390","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1225 12:39:44.813439 1463142 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"635d010e-faca-419b-bb39-e4491fddb4d2","resourceVersion":"390","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1225 12:39:44.813503 1463142 round_trippers.go:463] PUT https://192.168.39.21:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1225 12:39:44.813515 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:44.813527 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:44.813540 1463142 round_trippers.go:473]     Content-Type: application/json
	I1225 12:39:44.813553 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:44.814918 1463142 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1225 12:39:44.814939 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:44.814948 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:44.814955 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:44.814962 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:44.814974 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:44 GMT
	I1225 12:39:44.814981 1463142 round_trippers.go:580]     Audit-Id: 0b051afd-a372-4dae-a988-91a9fbf20d49
	I1225 12:39:44.814989 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:44.815747 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:44.819338 1463142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1225 12:39:44.819359 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:44.819366 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:44.819372 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:44.819379 1463142 round_trippers.go:580]     Content-Length: 1220
	I1225 12:39:44.819383 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:44 GMT
	I1225 12:39:44.819389 1463142 round_trippers.go:580]     Audit-Id: 16b2a39d-0d90-44ef-9825-02f77253b05d
	I1225 12:39:44.819393 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:44.819401 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:44.819432 1463142 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"635d010e-faca-419b-bb39-e4491fddb4d2","resourceVersion":"390","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1225 12:39:44.819585 1463142 main.go:141] libmachine: Making call to close driver server
	I1225 12:39:44.819605 1463142 main.go:141] libmachine: (multinode-544936) Calling .Close
	I1225 12:39:44.819893 1463142 main.go:141] libmachine: Successfully made call to close driver server
	I1225 12:39:44.819908 1463142 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 12:39:44.819931 1463142 main.go:141] libmachine: (multinode-544936) DBG | Closing plugin on server side
	I1225 12:39:44.821652 1463142 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1225 12:39:44.823012 1463142 addons.go:508] enable addons completed in 1.159626157s: enabled=[storage-provisioner default-storageclass]
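The PUT to /apis/storage.k8s.io/v1/storageclasses/standard above is the default-storageclass addon confirming that the "standard" class carries the storageclass.kubernetes.io/is-default-class=true annotation. A minimal sketch of checking for a default StorageClass with client-go; the helper name is illustrative.

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDefaultStorageClass reports whether any StorageClass in the cluster is
// annotated as the default, as "standard" is once the addon has applied it.
func hasDefaultStorageClass(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sc := range list.Items {
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			return true, nil
		}
	}
	return false, nil
}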
	I1225 12:39:45.309043 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:45.309071 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:45.309080 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:45.309086 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:45.312547 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:45.312571 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:45.312578 1463142 round_trippers.go:580]     Audit-Id: 48b6091f-c43d-47c3-9d80-add790f0c23e
	I1225 12:39:45.312584 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:45.312589 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:45.312594 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:45.312599 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:45.312604 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:45 GMT
	I1225 12:39:45.312850 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:45.809103 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:45.809138 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:45.809147 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:45.809153 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:45.811901 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:45.811920 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:45.811926 1463142 round_trippers.go:580]     Audit-Id: 570e4387-c68d-4454-9280-a874d7a7759b
	I1225 12:39:45.811932 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:45.811938 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:45.811949 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:45.811958 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:45.811967 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:45 GMT
	I1225 12:39:45.812338 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:46.309018 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:46.309048 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:46.309057 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:46.309066 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:46.311960 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:46.311989 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:46.311997 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:46 GMT
	I1225 12:39:46.312003 1463142 round_trippers.go:580]     Audit-Id: 1e84bded-812b-47b3-951c-f906653dd7c6
	I1225 12:39:46.312008 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:46.312013 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:46.312018 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:46.312023 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:46.312214 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:46.808937 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:46.808964 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:46.808973 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:46.808979 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:46.811683 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:46.811710 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:46.811717 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:46.811723 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:46 GMT
	I1225 12:39:46.811728 1463142 round_trippers.go:580]     Audit-Id: c7ce4ca4-a4c3-46e6-b43b-a68cc3b74ee1
	I1225 12:39:46.811733 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:46.811738 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:46.811746 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:46.812239 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:46.812653 1463142 node_ready.go:58] node "multinode-544936" has status "Ready":"False"
	I1225 12:39:47.308954 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:47.308979 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:47.308995 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:47.309002 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:47.313365 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:39:47.313395 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:47.313406 1463142 round_trippers.go:580]     Audit-Id: 0e116006-7560-416e-8818-340bde0de4a2
	I1225 12:39:47.313417 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:47.313431 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:47.313440 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:47.313450 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:47.313459 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:47 GMT
	I1225 12:39:47.313673 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:47.809624 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:47.809648 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:47.809657 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:47.809663 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:47.813328 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:47.813359 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:47.813367 1463142 round_trippers.go:580]     Audit-Id: dcf82ae5-ddbe-4cfb-a453-8290b4df8c24
	I1225 12:39:47.813374 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:47.813379 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:47.813385 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:47.813391 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:47.813396 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:47 GMT
	I1225 12:39:47.813531 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:48.309037 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:48.309071 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:48.309080 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:48.309086 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:48.312193 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:48.312218 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:48.312225 1463142 round_trippers.go:580]     Audit-Id: 2e14c424-a3ad-4278-bb58-d3cc22e51efb
	I1225 12:39:48.312231 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:48.312236 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:48.312241 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:48.312247 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:48.312252 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:48 GMT
	I1225 12:39:48.312949 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:48.809212 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:48.809247 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:48.809257 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:48.809264 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:48.812072 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:48.812104 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:48.812112 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:48.812118 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:48.812122 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:48.812128 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:48 GMT
	I1225 12:39:48.812133 1463142 round_trippers.go:580]     Audit-Id: 6c041d2a-1bdd-4931-b1e2-7dfa5b8d023a
	I1225 12:39:48.812138 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:48.812331 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:48.812777 1463142 node_ready.go:58] node "multinode-544936" has status "Ready":"False"
	I1225 12:39:49.310080 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:49.310110 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:49.310121 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:49.310129 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:49.313093 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:49.313117 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:49.313125 1463142 round_trippers.go:580]     Audit-Id: ce737af1-b144-4822-b7af-e91b0c70396a
	I1225 12:39:49.313131 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:49.313136 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:49.313141 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:49.313146 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:49.313158 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:49 GMT
	I1225 12:39:49.313584 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:49.809291 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:49.809323 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:49.809337 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:49.809345 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:49.812912 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:49.812941 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:49.812952 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:49.812960 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:49.812967 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:49.812979 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:49.812987 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:49 GMT
	I1225 12:39:49.812994 1463142 round_trippers.go:580]     Audit-Id: 1f9097ff-6915-4abd-b442-265bd9652d5f
	I1225 12:39:49.813153 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"348","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1225 12:39:50.309868 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:50.309902 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.309912 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.309919 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.313978 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:39:50.314007 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.314018 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.314026 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.314034 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.314041 1463142 round_trippers.go:580]     Audit-Id: a5d2f2ca-f947-406c-949a-967957f73838
	I1225 12:39:50.314049 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.314061 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.314191 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:50.314544 1463142 node_ready.go:49] node "multinode-544936" has status "Ready":"True"
	I1225 12:39:50.314565 1463142 node_ready.go:38] duration metric: took 5.505781957s waiting for node "multinode-544936" to be "Ready" ...
	I1225 12:39:50.314579 1463142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:39:50.314696 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:39:50.314707 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.314718 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.314727 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.318160 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:50.318176 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.318185 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.318193 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.318199 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.318207 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.318214 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.318223 1463142 round_trippers.go:580]     Audit-Id: ba8e8737-0441-4c6d-8c9a-38608c7bc7e0
	I1225 12:39:50.319126 1463142 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"421","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53878 chars]
	I1225 12:39:50.323552 1463142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:50.323652 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:39:50.323662 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.323670 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.323683 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.327832 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:39:50.327852 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.327859 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.327867 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.327875 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.327882 1463142 round_trippers.go:580]     Audit-Id: 915366f9-44f2-42c3-9b35-b3d9c48112b6
	I1225 12:39:50.327890 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.327898 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.328011 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"421","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1225 12:39:50.328538 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:50.328557 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.328568 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.328579 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.332602 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:39:50.332628 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.332638 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.332647 1463142 round_trippers.go:580]     Audit-Id: 8253068f-d22c-4367-b520-ab35dd2904d5
	I1225 12:39:50.332655 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.332664 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.332671 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.332678 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.333153 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:50.824152 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:39:50.824184 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.824194 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.824204 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.827029 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:50.827058 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.827069 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.827075 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.827080 1463142 round_trippers.go:580]     Audit-Id: 5501b09d-0f88-4033-a769-1d7ba1dc362c
	I1225 12:39:50.827085 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.827090 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.827097 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.827216 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"421","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1225 12:39:50.827796 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:50.827816 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:50.827827 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:50.827837 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:50.845393 1463142 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1225 12:39:50.845420 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:50.845431 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:50.845440 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:50.845446 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:50.845454 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:50.845460 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:50 GMT
	I1225 12:39:50.845467 1463142 round_trippers.go:580]     Audit-Id: 8c0ea604-9665-493c-b2e3-230921969c90
	I1225 12:39:50.846992 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.324770 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:39:51.324800 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.324811 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.324819 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.327862 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:51.327885 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.327893 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.327901 1463142 round_trippers.go:580]     Audit-Id: 25dc9069-ca66-49f8-a587-b33d2d7639a1
	I1225 12:39:51.327910 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.327917 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.327927 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.327936 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.328076 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"421","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1225 12:39:51.328567 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.328585 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.328596 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.328605 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.331068 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.331090 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.331096 1463142 round_trippers.go:580]     Audit-Id: 826e02c1-fe51-4829-8de0-e845f8eb26e6
	I1225 12:39:51.331102 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.331107 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.331112 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.331117 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.331122 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.331328 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.823959 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:39:51.824003 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.824016 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.824026 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.826911 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.826932 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.826939 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.826944 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.826949 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.826955 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.826960 1463142 round_trippers.go:580]     Audit-Id: d5bb950f-a8a1-4d88-bdf6-0e2626ddabc7
	I1225 12:39:51.826965 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.827115 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"433","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1225 12:39:51.827643 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.827661 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.827668 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.827674 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.829913 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.829934 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.829943 1463142 round_trippers.go:580]     Audit-Id: 33d18168-6d67-4606-85a7-ec898cdec7a0
	I1225 12:39:51.829951 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.829959 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.829973 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.829985 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.829997 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.830189 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.830585 1463142 pod_ready.go:92] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:51.830607 1463142 pod_ready.go:81] duration metric: took 1.507025564s waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.830617 1463142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.830684 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:39:51.830692 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.830699 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.830705 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.832995 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.833016 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.833025 1463142 round_trippers.go:580]     Audit-Id: 473ef8cd-fb68-40b8-9325-3752210025e5
	I1225 12:39:51.833040 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.833050 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.833061 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.833073 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.833082 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.833213 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"382","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1225 12:39:51.833753 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.833769 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.833777 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.833786 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.835786 1463142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:39:51.835805 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.835813 1463142 round_trippers.go:580]     Audit-Id: 7de60ee2-0cd8-4b39-ba81-4fbb44d26e1d
	I1225 12:39:51.835821 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.835829 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.835841 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.835853 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.835865 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.836037 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.836453 1463142 pod_ready.go:92] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:51.836475 1463142 pod_ready.go:81] duration metric: took 5.851908ms waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.836487 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.836579 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:39:51.836589 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.836596 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.836603 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.838500 1463142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:39:51.838520 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.838529 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.838538 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.838551 1463142 round_trippers.go:580]     Audit-Id: 7ff3ad16-3fca-4ff9-8a65-c6d9b05926d2
	I1225 12:39:51.838563 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.838576 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.838588 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.838946 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"300","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1225 12:39:51.839381 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.839398 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.839408 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.839416 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.841829 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.841849 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.841858 1463142 round_trippers.go:580]     Audit-Id: 52c0e962-e37b-4eea-8f11-16ef7002840f
	I1225 12:39:51.841866 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.841873 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.841883 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.841897 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.841908 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.842031 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.842332 1463142 pod_ready.go:92] pod "kube-apiserver-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:51.842353 1463142 pod_ready.go:81] duration metric: took 5.855701ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.842366 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.842445 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:39:51.842456 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.842466 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.842486 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.844695 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.844718 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.844728 1463142 round_trippers.go:580]     Audit-Id: 0daf165e-e26b-43a2-9560-65bafff61dd7
	I1225 12:39:51.844734 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.844739 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.844747 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.844752 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.844758 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.845032 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"296","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1225 12:39:51.845441 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.845455 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.845465 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.845473 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.847410 1463142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:39:51.847428 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.847438 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.847447 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.847457 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.847464 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.847469 1463142 round_trippers.go:580]     Audit-Id: 53aab048-3d68-4867-9e00-27873642cb06
	I1225 12:39:51.847474 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.847758 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.848061 1463142 pod_ready.go:92] pod "kube-controller-manager-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:51.848079 1463142 pod_ready.go:81] duration metric: took 5.700571ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.848093 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.848156 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:39:51.848166 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.848177 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.848190 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.850269 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.850288 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.850294 1463142 round_trippers.go:580]     Audit-Id: 6e0e2627-dbbe-406e-b6c5-d3a289c7040f
	I1225 12:39:51.850299 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.850305 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.850313 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.850324 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.850333 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.850675 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"405","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:39:51.910308 1463142 request.go:629] Waited for 59.236846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.910402 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:51.910410 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:51.910422 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:51.910441 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:51.913385 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:51.913407 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:51.913414 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:51.913420 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:51.913425 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:51.913430 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:51.913438 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:51 GMT
	I1225 12:39:51.913443 1463142 round_trippers.go:580]     Audit-Id: a321313d-649e-4296-8e56-2e96034523e0
	I1225 12:39:51.913677 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:51.914145 1463142 pod_ready.go:92] pod "kube-proxy-k4jc7" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:51.914173 1463142 pod_ready.go:81] duration metric: took 66.068533ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:51.914186 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:52.110699 1463142 request.go:629] Waited for 196.403901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:39:52.110775 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:39:52.110780 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.110788 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.110794 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.113650 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:52.113689 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.113701 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.113710 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.113719 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.113726 1463142 round_trippers.go:580]     Audit-Id: a2fb83b4-3312-4076-81c1-69838ab8d34b
	I1225 12:39:52.113731 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.113736 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.114075 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"299","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1225 12:39:52.309793 1463142 request.go:629] Waited for 195.310068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:52.309887 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:39:52.309895 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.309907 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.309917 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.312902 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:52.312919 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.312929 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.312935 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.312940 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.312946 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.312952 1463142 round_trippers.go:580]     Audit-Id: f1691071-031f-485b-b0e3-df73ed3bd1ff
	I1225 12:39:52.312960 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.313177 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:39:52.313497 1463142 pod_ready.go:92] pod "kube-scheduler-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:39:52.313513 1463142 pod_ready.go:81] duration metric: took 399.312493ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:39:52.313528 1463142 pod_ready.go:38] duration metric: took 1.998910915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:39:52.313546 1463142 api_server.go:52] waiting for apiserver process to appear ...
	I1225 12:39:52.313604 1463142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:39:52.329438 1463142 command_runner.go:130] > 1113
	I1225 12:39:52.329593 1463142 api_server.go:72] duration metric: took 8.101932368s to wait for apiserver process to appear ...
	I1225 12:39:52.329614 1463142 api_server.go:88] waiting for apiserver healthz status ...
	I1225 12:39:52.329640 1463142 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:39:52.335171 1463142 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I1225 12:39:52.335252 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/version
	I1225 12:39:52.335259 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.335272 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.335281 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.336361 1463142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:39:52.336376 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.336385 1463142 round_trippers.go:580]     Audit-Id: 11b83ccc-dadd-4e02-8337-d9db7f6738c3
	I1225 12:39:52.336393 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.336402 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.336408 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.336413 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.336418 1463142 round_trippers.go:580]     Content-Length: 264
	I1225 12:39:52.336427 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.336451 1463142 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1225 12:39:52.336573 1463142 api_server.go:141] control plane version: v1.28.4
	I1225 12:39:52.336596 1463142 api_server.go:131] duration metric: took 6.976377ms to wait for apiserver health ...
	I1225 12:39:52.336605 1463142 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 12:39:52.509951 1463142 request.go:629] Waited for 173.261622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:39:52.510045 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:39:52.510051 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.510061 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.510071 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.513502 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:52.513523 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.513530 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.513536 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.513549 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.513554 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.513561 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.513568 1463142 round_trippers.go:580]     Audit-Id: fac5a8ba-451a-4b5b-ae19-cf49321ef82d
	I1225 12:39:52.514576 1463142 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"433","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1225 12:39:52.516374 1463142 system_pods.go:59] 8 kube-system pods found
	I1225 12:39:52.516410 1463142 system_pods.go:61] "coredns-5dd5756b68-mg2zk" [4f4e21f4-8e73-4b81-a080-c42b6980ee3b] Running
	I1225 12:39:52.516415 1463142 system_pods.go:61] "etcd-multinode-544936" [8dc9103e-ec1a-40f4-80f8-4f4918bb5e33] Running
	I1225 12:39:52.516419 1463142 system_pods.go:61] "kindnet-2hjhm" [8cfe7daa-3fc7-485a-8794-117466297c5a] Running
	I1225 12:39:52.516424 1463142 system_pods.go:61] "kube-apiserver-multinode-544936" [d0fda9c8-27cf-4ecc-b379-39745cb7ec19] Running
	I1225 12:39:52.516429 1463142 system_pods.go:61] "kube-controller-manager-multinode-544936" [e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0] Running
	I1225 12:39:52.516433 1463142 system_pods.go:61] "kube-proxy-k4jc7" [14699a0d-601b-4bc3-9584-7ac67822a926] Running
	I1225 12:39:52.516437 1463142 system_pods.go:61] "kube-scheduler-multinode-544936" [e8027489-26d3-44c3-aeea-286e6689e75e] Running
	I1225 12:39:52.516441 1463142 system_pods.go:61] "storage-provisioner" [897346ba-f39d-4771-913e-535bff9ca6b7] Running
	I1225 12:39:52.516449 1463142 system_pods.go:74] duration metric: took 179.83348ms to wait for pod list to return data ...
	I1225 12:39:52.516458 1463142 default_sa.go:34] waiting for default service account to be created ...
	I1225 12:39:52.710513 1463142 request.go:629] Waited for 193.972281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/default/serviceaccounts
	I1225 12:39:52.710578 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/default/serviceaccounts
	I1225 12:39:52.710583 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.710592 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.710598 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.713222 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:39:52.713241 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.713248 1463142 round_trippers.go:580]     Content-Length: 261
	I1225 12:39:52.713257 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.713263 1463142 round_trippers.go:580]     Audit-Id: bc67f876-e6a7-4ffe-8864-a72aa97942a4
	I1225 12:39:52.713268 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.713273 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.713278 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.713284 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.713305 1463142 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c31b3c66-4ba0-4c6f-b7ee-b896b98df101","resourceVersion":"337","creationTimestamp":"2023-12-25T12:39:43Z"}}]}
	I1225 12:39:52.713543 1463142 default_sa.go:45] found service account: "default"
	I1225 12:39:52.713580 1463142 default_sa.go:55] duration metric: took 197.108394ms for default service account to be created ...
	I1225 12:39:52.713594 1463142 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 12:39:52.910005 1463142 request.go:629] Waited for 196.317312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:39:52.910075 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:39:52.910080 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:52.910088 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:52.910095 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:52.918916 1463142 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1225 12:39:52.918941 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:52.918952 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:52.918960 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:52.918967 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:52.918982 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:52 GMT
	I1225 12:39:52.918990 1463142 round_trippers.go:580]     Audit-Id: 67f62359-677f-4ee8-be63-de00f1e65297
	I1225 12:39:52.918998 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:52.921531 1463142 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"433","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1225 12:39:52.923345 1463142 system_pods.go:86] 8 kube-system pods found
	I1225 12:39:52.923377 1463142 system_pods.go:89] "coredns-5dd5756b68-mg2zk" [4f4e21f4-8e73-4b81-a080-c42b6980ee3b] Running
	I1225 12:39:52.923383 1463142 system_pods.go:89] "etcd-multinode-544936" [8dc9103e-ec1a-40f4-80f8-4f4918bb5e33] Running
	I1225 12:39:52.923388 1463142 system_pods.go:89] "kindnet-2hjhm" [8cfe7daa-3fc7-485a-8794-117466297c5a] Running
	I1225 12:39:52.923393 1463142 system_pods.go:89] "kube-apiserver-multinode-544936" [d0fda9c8-27cf-4ecc-b379-39745cb7ec19] Running
	I1225 12:39:52.923398 1463142 system_pods.go:89] "kube-controller-manager-multinode-544936" [e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0] Running
	I1225 12:39:52.923402 1463142 system_pods.go:89] "kube-proxy-k4jc7" [14699a0d-601b-4bc3-9584-7ac67822a926] Running
	I1225 12:39:52.923407 1463142 system_pods.go:89] "kube-scheduler-multinode-544936" [e8027489-26d3-44c3-aeea-286e6689e75e] Running
	I1225 12:39:52.923411 1463142 system_pods.go:89] "storage-provisioner" [897346ba-f39d-4771-913e-535bff9ca6b7] Running
	I1225 12:39:52.923417 1463142 system_pods.go:126] duration metric: took 209.817928ms to wait for k8s-apps to be running ...
	I1225 12:39:52.923425 1463142 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:39:52.923475 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:39:52.937170 1463142 system_svc.go:56] duration metric: took 13.729508ms WaitForService to wait for kubelet.
	I1225 12:39:52.937205 1463142 kubeadm.go:581] duration metric: took 8.709550439s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:39:52.937233 1463142 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:39:53.110664 1463142 request.go:629] Waited for 173.340616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes
	I1225 12:39:53.110759 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:39:53.110767 1463142 round_trippers.go:469] Request Headers:
	I1225 12:39:53.110786 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:39:53.110801 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:39:53.113912 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:39:53.113935 1463142 round_trippers.go:577] Response Headers:
	I1225 12:39:53.113942 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:39:53.113948 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:39:53.113953 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:39:53 GMT
	I1225 12:39:53.113958 1463142 round_trippers.go:580]     Audit-Id: b3819ff4-d55a-41cc-ad31-612b45a5940b
	I1225 12:39:53.113964 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:39:53.113971 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:39:53.114138 1463142 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1225 12:39:53.114578 1463142 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:39:53.114609 1463142 node_conditions.go:123] node cpu capacity is 2
	I1225 12:39:53.114623 1463142 node_conditions.go:105] duration metric: took 177.384497ms to run NodePressure ...
	I1225 12:39:53.114639 1463142 start.go:228] waiting for startup goroutines ...
	I1225 12:39:53.114652 1463142 start.go:233] waiting for cluster config update ...
	I1225 12:39:53.114669 1463142 start.go:242] writing updated cluster config ...
	I1225 12:39:53.116729 1463142 out.go:177] 
	I1225 12:39:53.118124 1463142 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:39:53.118210 1463142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:39:53.119959 1463142 out.go:177] * Starting worker node multinode-544936-m02 in cluster multinode-544936
	I1225 12:39:53.121178 1463142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:39:53.121200 1463142 cache.go:56] Caching tarball of preloaded images
	I1225 12:39:53.121311 1463142 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:39:53.121324 1463142 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:39:53.121404 1463142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:39:53.121574 1463142 start.go:365] acquiring machines lock for multinode-544936-m02: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:39:53.121637 1463142 start.go:369] acquired machines lock for "multinode-544936-m02" in 41.006µs
	I1225 12:39:53.121664 1463142 start.go:93] Provisioning new machine with config: &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:39:53.121749 1463142 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1225 12:39:53.123411 1463142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1225 12:39:53.123554 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:39:53.123601 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:39:53.138677 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I1225 12:39:53.139112 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:39:53.139579 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:39:53.139606 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:39:53.139949 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:39:53.140184 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:39:53.140316 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:39:53.140447 1463142 start.go:159] libmachine.API.Create for "multinode-544936" (driver="kvm2")
	I1225 12:39:53.140478 1463142 client.go:168] LocalClient.Create starting
	I1225 12:39:53.140521 1463142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem
	I1225 12:39:53.140565 1463142 main.go:141] libmachine: Decoding PEM data...
	I1225 12:39:53.140590 1463142 main.go:141] libmachine: Parsing certificate...
	I1225 12:39:53.140660 1463142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem
	I1225 12:39:53.140687 1463142 main.go:141] libmachine: Decoding PEM data...
	I1225 12:39:53.140705 1463142 main.go:141] libmachine: Parsing certificate...
	I1225 12:39:53.140730 1463142 main.go:141] libmachine: Running pre-create checks...
	I1225 12:39:53.140741 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .PreCreateCheck
	I1225 12:39:53.140886 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetConfigRaw
	I1225 12:39:53.141299 1463142 main.go:141] libmachine: Creating machine...
	I1225 12:39:53.141319 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .Create
	I1225 12:39:53.141438 1463142 main.go:141] libmachine: (multinode-544936-m02) Creating KVM machine...
	I1225 12:39:53.142673 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found existing default KVM network
	I1225 12:39:53.142762 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found existing private KVM network mk-multinode-544936
	I1225 12:39:53.142856 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting up store path in /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02 ...
	I1225 12:39:53.142896 1463142 main.go:141] libmachine: (multinode-544936-m02) Building disk image from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 12:39:53.142960 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:53.142843 1463515 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:39:53.143010 1463142 main.go:141] libmachine: (multinode-544936-m02) Downloading /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1225 12:39:53.390750 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:53.390609 1463515 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa...
	I1225 12:39:53.485449 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:53.485272 1463515 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/multinode-544936-m02.rawdisk...
	I1225 12:39:53.485507 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Writing magic tar header
	I1225 12:39:53.485537 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Writing SSH key tar header
	I1225 12:39:53.485566 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:53.485421 1463515 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02 ...
	I1225 12:39:53.485581 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02
	I1225 12:39:53.485591 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines
	I1225 12:39:53.485606 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02 (perms=drwx------)
	I1225 12:39:53.485630 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines (perms=drwxr-xr-x)
	I1225 12:39:53.485645 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube (perms=drwxr-xr-x)
	I1225 12:39:53.485656 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:39:53.485684 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600 (perms=drwxrwxr-x)
	I1225 12:39:53.485700 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1225 12:39:53.485712 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600
	I1225 12:39:53.485725 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1225 12:39:53.485739 1463142 main.go:141] libmachine: (multinode-544936-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1225 12:39:53.485753 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home/jenkins
	I1225 12:39:53.485766 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Checking permissions on dir: /home
	I1225 12:39:53.485781 1463142 main.go:141] libmachine: (multinode-544936-m02) Creating domain...
	I1225 12:39:53.485794 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Skipping /home - not owner
	I1225 12:39:53.486808 1463142 main.go:141] libmachine: (multinode-544936-m02) define libvirt domain using xml: 
	I1225 12:39:53.486834 1463142 main.go:141] libmachine: (multinode-544936-m02) <domain type='kvm'>
	I1225 12:39:53.486842 1463142 main.go:141] libmachine: (multinode-544936-m02)   <name>multinode-544936-m02</name>
	I1225 12:39:53.486847 1463142 main.go:141] libmachine: (multinode-544936-m02)   <memory unit='MiB'>2200</memory>
	I1225 12:39:53.486854 1463142 main.go:141] libmachine: (multinode-544936-m02)   <vcpu>2</vcpu>
	I1225 12:39:53.486859 1463142 main.go:141] libmachine: (multinode-544936-m02)   <features>
	I1225 12:39:53.486867 1463142 main.go:141] libmachine: (multinode-544936-m02)     <acpi/>
	I1225 12:39:53.486872 1463142 main.go:141] libmachine: (multinode-544936-m02)     <apic/>
	I1225 12:39:53.486880 1463142 main.go:141] libmachine: (multinode-544936-m02)     <pae/>
	I1225 12:39:53.486885 1463142 main.go:141] libmachine: (multinode-544936-m02)     
	I1225 12:39:53.486892 1463142 main.go:141] libmachine: (multinode-544936-m02)   </features>
	I1225 12:39:53.486898 1463142 main.go:141] libmachine: (multinode-544936-m02)   <cpu mode='host-passthrough'>
	I1225 12:39:53.486904 1463142 main.go:141] libmachine: (multinode-544936-m02)   
	I1225 12:39:53.486912 1463142 main.go:141] libmachine: (multinode-544936-m02)   </cpu>
	I1225 12:39:53.486918 1463142 main.go:141] libmachine: (multinode-544936-m02)   <os>
	I1225 12:39:53.486924 1463142 main.go:141] libmachine: (multinode-544936-m02)     <type>hvm</type>
	I1225 12:39:53.486931 1463142 main.go:141] libmachine: (multinode-544936-m02)     <boot dev='cdrom'/>
	I1225 12:39:53.486936 1463142 main.go:141] libmachine: (multinode-544936-m02)     <boot dev='hd'/>
	I1225 12:39:53.486943 1463142 main.go:141] libmachine: (multinode-544936-m02)     <bootmenu enable='no'/>
	I1225 12:39:53.486948 1463142 main.go:141] libmachine: (multinode-544936-m02)   </os>
	I1225 12:39:53.486986 1463142 main.go:141] libmachine: (multinode-544936-m02)   <devices>
	I1225 12:39:53.487013 1463142 main.go:141] libmachine: (multinode-544936-m02)     <disk type='file' device='cdrom'>
	I1225 12:39:53.487028 1463142 main.go:141] libmachine: (multinode-544936-m02)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/boot2docker.iso'/>
	I1225 12:39:53.487042 1463142 main.go:141] libmachine: (multinode-544936-m02)       <target dev='hdc' bus='scsi'/>
	I1225 12:39:53.487055 1463142 main.go:141] libmachine: (multinode-544936-m02)       <readonly/>
	I1225 12:39:53.487067 1463142 main.go:141] libmachine: (multinode-544936-m02)     </disk>
	I1225 12:39:53.487079 1463142 main.go:141] libmachine: (multinode-544936-m02)     <disk type='file' device='disk'>
	I1225 12:39:53.487096 1463142 main.go:141] libmachine: (multinode-544936-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1225 12:39:53.487112 1463142 main.go:141] libmachine: (multinode-544936-m02)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/multinode-544936-m02.rawdisk'/>
	I1225 12:39:53.487124 1463142 main.go:141] libmachine: (multinode-544936-m02)       <target dev='hda' bus='virtio'/>
	I1225 12:39:53.487135 1463142 main.go:141] libmachine: (multinode-544936-m02)     </disk>
	I1225 12:39:53.487148 1463142 main.go:141] libmachine: (multinode-544936-m02)     <interface type='network'>
	I1225 12:39:53.487184 1463142 main.go:141] libmachine: (multinode-544936-m02)       <source network='mk-multinode-544936'/>
	I1225 12:39:53.487200 1463142 main.go:141] libmachine: (multinode-544936-m02)       <model type='virtio'/>
	I1225 12:39:53.487214 1463142 main.go:141] libmachine: (multinode-544936-m02)     </interface>
	I1225 12:39:53.487225 1463142 main.go:141] libmachine: (multinode-544936-m02)     <interface type='network'>
	I1225 12:39:53.487238 1463142 main.go:141] libmachine: (multinode-544936-m02)       <source network='default'/>
	I1225 12:39:53.487250 1463142 main.go:141] libmachine: (multinode-544936-m02)       <model type='virtio'/>
	I1225 12:39:53.487271 1463142 main.go:141] libmachine: (multinode-544936-m02)     </interface>
	I1225 12:39:53.487285 1463142 main.go:141] libmachine: (multinode-544936-m02)     <serial type='pty'>
	I1225 12:39:53.487329 1463142 main.go:141] libmachine: (multinode-544936-m02)       <target port='0'/>
	I1225 12:39:53.487354 1463142 main.go:141] libmachine: (multinode-544936-m02)     </serial>
	I1225 12:39:53.487370 1463142 main.go:141] libmachine: (multinode-544936-m02)     <console type='pty'>
	I1225 12:39:53.487389 1463142 main.go:141] libmachine: (multinode-544936-m02)       <target type='serial' port='0'/>
	I1225 12:39:53.487403 1463142 main.go:141] libmachine: (multinode-544936-m02)     </console>
	I1225 12:39:53.487416 1463142 main.go:141] libmachine: (multinode-544936-m02)     <rng model='virtio'>
	I1225 12:39:53.487432 1463142 main.go:141] libmachine: (multinode-544936-m02)       <backend model='random'>/dev/random</backend>
	I1225 12:39:53.487444 1463142 main.go:141] libmachine: (multinode-544936-m02)     </rng>
	I1225 12:39:53.487455 1463142 main.go:141] libmachine: (multinode-544936-m02)     
	I1225 12:39:53.487469 1463142 main.go:141] libmachine: (multinode-544936-m02)     
	I1225 12:39:53.487486 1463142 main.go:141] libmachine: (multinode-544936-m02)   </devices>
	I1225 12:39:53.487506 1463142 main.go:141] libmachine: (multinode-544936-m02) </domain>
	I1225 12:39:53.487519 1463142 main.go:141] libmachine: (multinode-544936-m02) 
	I1225 12:39:53.495784 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:4b:a2:b7 in network default
	I1225 12:39:53.496369 1463142 main.go:141] libmachine: (multinode-544936-m02) Ensuring networks are active...
	I1225 12:39:53.496392 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:53.497276 1463142 main.go:141] libmachine: (multinode-544936-m02) Ensuring network default is active
	I1225 12:39:53.497590 1463142 main.go:141] libmachine: (multinode-544936-m02) Ensuring network mk-multinode-544936 is active
	I1225 12:39:53.497934 1463142 main.go:141] libmachine: (multinode-544936-m02) Getting domain xml...
	I1225 12:39:53.498666 1463142 main.go:141] libmachine: (multinode-544936-m02) Creating domain...
	I1225 12:39:54.771809 1463142 main.go:141] libmachine: (multinode-544936-m02) Waiting to get IP...
	I1225 12:39:54.772667 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:54.773161 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:54.773208 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:54.773144 1463515 retry.go:31] will retry after 285.709451ms: waiting for machine to come up
	I1225 12:39:55.060830 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:55.061402 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:55.061427 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:55.061348 1463515 retry.go:31] will retry after 262.267722ms: waiting for machine to come up
	I1225 12:39:55.324759 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:55.325161 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:55.325190 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:55.325111 1463515 retry.go:31] will retry after 452.517395ms: waiting for machine to come up
	I1225 12:39:55.778759 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:55.779253 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:55.779285 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:55.779198 1463515 retry.go:31] will retry after 398.813154ms: waiting for machine to come up
	I1225 12:39:56.180046 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:56.180596 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:56.180619 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:56.180540 1463515 retry.go:31] will retry after 556.328062ms: waiting for machine to come up
	I1225 12:39:56.738419 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:56.738903 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:56.738931 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:56.738861 1463515 retry.go:31] will retry after 616.916364ms: waiting for machine to come up
	I1225 12:39:57.357206 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:57.357665 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:57.357690 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:57.357615 1463515 retry.go:31] will retry after 1.124958355s: waiting for machine to come up
	I1225 12:39:58.484437 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:58.484916 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:58.484935 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:58.484862 1463515 retry.go:31] will retry after 1.032252169s: waiting for machine to come up
	I1225 12:39:59.519099 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:39:59.519549 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:39:59.519579 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:39:59.519487 1463515 retry.go:31] will retry after 1.318372855s: waiting for machine to come up
	I1225 12:40:00.840057 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:00.840538 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:40:00.840564 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:40:00.840506 1463515 retry.go:31] will retry after 1.734819264s: waiting for machine to come up
	I1225 12:40:02.576912 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:02.577335 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:40:02.577367 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:40:02.577282 1463515 retry.go:31] will retry after 1.842436085s: waiting for machine to come up
	I1225 12:40:04.421122 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:04.421560 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:40:04.421592 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:40:04.421496 1463515 retry.go:31] will retry after 3.216104845s: waiting for machine to come up
	I1225 12:40:07.642058 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:07.642561 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:40:07.642585 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:40:07.642516 1463515 retry.go:31] will retry after 4.175676731s: waiting for machine to come up
	I1225 12:40:11.822863 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:11.823312 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find current IP address of domain multinode-544936-m02 in network mk-multinode-544936
	I1225 12:40:11.823341 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | I1225 12:40:11.823268 1463515 retry.go:31] will retry after 4.269537832s: waiting for machine to come up
	I1225 12:40:16.096966 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:16.097468 1463142 main.go:141] libmachine: (multinode-544936-m02) Found IP for machine: 192.168.39.205
	I1225 12:40:16.097499 1463142 main.go:141] libmachine: (multinode-544936-m02) Reserving static IP address...
	I1225 12:40:16.097515 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has current primary IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:16.097912 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find host DHCP lease matching {name: "multinode-544936-m02", mac: "52:54:00:7c:ce:ff", ip: "192.168.39.205"} in network mk-multinode-544936
	I1225 12:40:16.191293 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Getting to WaitForSSH function...
	I1225 12:40:16.191331 1463142 main.go:141] libmachine: (multinode-544936-m02) Reserved static IP address: 192.168.39.205
	I1225 12:40:16.191345 1463142 main.go:141] libmachine: (multinode-544936-m02) Waiting for SSH to be available...
	I1225 12:40:16.194668 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:16.195087 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936
	I1225 12:40:16.195122 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | unable to find defined IP address of network mk-multinode-544936 interface with MAC address 52:54:00:7c:ce:ff
	I1225 12:40:16.195253 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Using SSH client type: external
	I1225 12:40:16.195295 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa (-rw-------)
	I1225 12:40:16.195333 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:40:16.195346 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | About to run SSH command:
	I1225 12:40:16.195367 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | exit 0
	I1225 12:40:16.199557 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | SSH cmd err, output: exit status 255: 
	I1225 12:40:16.199584 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1225 12:40:16.199597 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | command : exit 0
	I1225 12:40:16.199606 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | err     : exit status 255
	I1225 12:40:16.199617 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | output  : 
	I1225 12:40:19.199801 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Getting to WaitForSSH function...
	I1225 12:40:19.202583 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.202993 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.203031 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.203182 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Using SSH client type: external
	I1225 12:40:19.203209 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa (-rw-------)
	I1225 12:40:19.203268 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:40:19.203298 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | About to run SSH command:
	I1225 12:40:19.203324 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | exit 0
	I1225 12:40:19.294537 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | SSH cmd err, output: <nil>: 
	I1225 12:40:19.294828 1463142 main.go:141] libmachine: (multinode-544936-m02) KVM machine creation complete!
	I1225 12:40:19.295141 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetConfigRaw
	I1225 12:40:19.295754 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:19.295964 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:19.296119 1463142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1225 12:40:19.296131 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetState
	I1225 12:40:19.297691 1463142 main.go:141] libmachine: Detecting operating system of created instance...
	I1225 12:40:19.297707 1463142 main.go:141] libmachine: Waiting for SSH to be available...
	I1225 12:40:19.297714 1463142 main.go:141] libmachine: Getting to WaitForSSH function...
	I1225 12:40:19.297724 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.300208 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.300605 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.300635 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.300800 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:19.300991 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.301142 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.301307 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:19.301451 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:19.301874 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:19.301889 1463142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1225 12:40:19.426079 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:40:19.426108 1463142 main.go:141] libmachine: Detecting the provisioner...
	I1225 12:40:19.426117 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.429713 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.430383 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.430415 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.430566 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:19.430789 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.430961 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.431084 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:19.431255 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:19.431570 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:19.431582 1463142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1225 12:40:19.555975 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1225 12:40:19.556078 1463142 main.go:141] libmachine: found compatible host: buildroot
	I1225 12:40:19.556097 1463142 main.go:141] libmachine: Provisioning with buildroot...
	I1225 12:40:19.556120 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:40:19.556470 1463142 buildroot.go:166] provisioning hostname "multinode-544936-m02"
	I1225 12:40:19.556499 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:40:19.556666 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.559320 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.559681 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.559711 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.559880 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:19.560050 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.560181 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.560335 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:19.560600 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:19.560975 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:19.560990 1463142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936-m02 && echo "multinode-544936-m02" | sudo tee /etc/hostname
	I1225 12:40:19.699706 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-544936-m02
	
	I1225 12:40:19.699733 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.702788 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.703263 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.703302 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.703529 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:19.703777 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.703985 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.704112 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:19.704305 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:19.704674 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:19.704696 1463142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-544936-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-544936-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-544936-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:40:19.839805 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:40:19.839841 1463142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:40:19.839857 1463142 buildroot.go:174] setting up certificates
	I1225 12:40:19.839868 1463142 provision.go:83] configureAuth start
	I1225 12:40:19.839881 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:40:19.840238 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:40:19.843308 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.843751 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.843783 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.843909 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.846114 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.846529 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.846571 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.846687 1463142 provision.go:138] copyHostCerts
	I1225 12:40:19.846728 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:40:19.846768 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:40:19.846779 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:40:19.846850 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:40:19.846935 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:40:19.846952 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:40:19.846959 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:40:19.846982 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:40:19.847030 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:40:19.847046 1463142 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:40:19.847052 1463142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:40:19.847073 1463142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:40:19.847121 1463142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.multinode-544936-m02 san=[192.168.39.205 192.168.39.205 localhost 127.0.0.1 minikube multinode-544936-m02]
	I1225 12:40:19.994812 1463142 provision.go:172] copyRemoteCerts
	I1225 12:40:19.994881 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:40:19.994910 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:19.998298 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.998727 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:19.998757 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:19.998987 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:19.999230 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:19.999457 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:19.999657 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:40:20.092086 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:40:20.092175 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:40:20.116374 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:40:20.116446 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1225 12:40:20.140135 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:40:20.140213 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 12:40:20.164255 1463142 provision.go:86] duration metric: configureAuth took 324.369757ms
	I1225 12:40:20.164284 1463142 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:40:20.164464 1463142 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:40:20.164547 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:20.167350 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.167772 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.167812 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.168003 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:20.168359 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.168607 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.168792 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:20.168958 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:20.169305 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:20.169323 1463142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:40:20.488680 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:40:20.488720 1463142 main.go:141] libmachine: Checking connection to Docker...
	I1225 12:40:20.488733 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetURL
	I1225 12:40:20.490044 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | Using libvirt version 6000000
	I1225 12:40:20.492590 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.492939 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.492976 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.493138 1463142 main.go:141] libmachine: Docker is up and running!
	I1225 12:40:20.493152 1463142 main.go:141] libmachine: Reticulating splines...
	I1225 12:40:20.493159 1463142 client.go:171] LocalClient.Create took 27.352670364s
	I1225 12:40:20.493187 1463142 start.go:167] duration metric: libmachine.API.Create for "multinode-544936" took 27.352741382s
	I1225 12:40:20.493201 1463142 start.go:300] post-start starting for "multinode-544936-m02" (driver="kvm2")
	I1225 12:40:20.493216 1463142 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:40:20.493240 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:20.493546 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:40:20.493581 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:20.495775 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.496139 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.496161 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.496339 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:20.496515 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.496672 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:20.496833 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:40:20.588723 1463142 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:40:20.592889 1463142 command_runner.go:130] > NAME=Buildroot
	I1225 12:40:20.592910 1463142 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1225 12:40:20.592915 1463142 command_runner.go:130] > ID=buildroot
	I1225 12:40:20.592920 1463142 command_runner.go:130] > VERSION_ID=2021.02.12
	I1225 12:40:20.592925 1463142 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1225 12:40:20.593172 1463142 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:40:20.593199 1463142 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:40:20.593272 1463142 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:40:20.593341 1463142 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:40:20.593351 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:40:20.593458 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:40:20.602381 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:40:20.624881 1463142 start.go:303] post-start completed in 131.66229ms
	I1225 12:40:20.624942 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetConfigRaw
	I1225 12:40:20.625631 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:40:20.628474 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.628881 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.628914 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.629177 1463142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:40:20.629363 1463142 start.go:128] duration metric: createHost completed in 27.507601839s
	I1225 12:40:20.629386 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:20.631600 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.631937 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.631968 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.632092 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:20.632276 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.632456 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.632632 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:20.632827 1463142 main.go:141] libmachine: Using SSH client type: native
	I1225 12:40:20.633275 1463142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:40:20.633292 1463142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:40:20.759294 1463142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703508020.745307108
	
	I1225 12:40:20.759319 1463142 fix.go:206] guest clock: 1703508020.745307108
	I1225 12:40:20.759330 1463142 fix.go:219] Guest: 2023-12-25 12:40:20.745307108 +0000 UTC Remote: 2023-12-25 12:40:20.629375021 +0000 UTC m=+93.133391713 (delta=115.932087ms)
	I1225 12:40:20.759366 1463142 fix.go:190] guest clock delta is within tolerance: 115.932087ms
	I1225 12:40:20.759374 1463142 start.go:83] releasing machines lock for "multinode-544936-m02", held for 27.63772435s
	I1225 12:40:20.759402 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:20.759745 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:40:20.762660 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.763008 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.763032 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.765499 1463142 out.go:177] * Found network options:
	I1225 12:40:20.767010 1463142 out.go:177]   - NO_PROXY=192.168.39.21
	W1225 12:40:20.768328 1463142 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:40:20.768361 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:20.768937 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:20.769134 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:40:20.769261 1463142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:40:20.769308 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	W1225 12:40:20.769326 1463142 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:40:20.769408 1463142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:40:20.769429 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:40:20.771957 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.772305 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.772339 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.772360 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.772582 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:20.772798 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.772863 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:20.772902 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:20.772974 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:20.773025 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:40:20.773150 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:40:20.773143 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:40:20.773282 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:40:20.773409 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:40:20.887236 1463142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1225 12:40:21.016863 1463142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1225 12:40:21.023134 1463142 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1225 12:40:21.023287 1463142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:40:21.023343 1463142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:40:21.041846 1463142 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1225 12:40:21.041917 1463142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 12:40:21.041927 1463142 start.go:475] detecting cgroup driver to use...
	I1225 12:40:21.042004 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:40:21.060816 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:40:21.076685 1463142 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:40:21.076763 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:40:21.092211 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:40:21.108956 1463142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:40:21.234514 1463142 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1225 12:40:21.234600 1463142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:40:21.248806 1463142 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1225 12:40:21.362945 1463142 docker.go:219] disabling docker service ...
	I1225 12:40:21.363023 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:40:21.377600 1463142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:40:21.389524 1463142 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1225 12:40:21.389730 1463142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:40:21.403651 1463142 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1225 12:40:21.512844 1463142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:40:21.633833 1463142 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1225 12:40:21.633864 1463142 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1225 12:40:21.633930 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:40:21.647657 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:40:21.664797 1463142 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1225 12:40:21.665268 1463142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:40:21.665338 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:40:21.674801 1463142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:40:21.674883 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:40:21.684479 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:40:21.693696 1463142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:40:21.704073 1463142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:40:21.715254 1463142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:40:21.724105 1463142 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:40:21.724148 1463142 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:40:21.724195 1463142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 12:40:21.737789 1463142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:40:21.746957 1463142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:40:21.872844 1463142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 12:40:22.041026 1463142 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:40:22.041102 1463142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:40:22.046231 1463142 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1225 12:40:22.046264 1463142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1225 12:40:22.046274 1463142 command_runner.go:130] > Device: 16h/22d	Inode: 715         Links: 1
	I1225 12:40:22.046291 1463142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:40:22.046300 1463142 command_runner.go:130] > Access: 2023-12-25 12:40:22.015656530 +0000
	I1225 12:40:22.046313 1463142 command_runner.go:130] > Modify: 2023-12-25 12:40:22.015656530 +0000
	I1225 12:40:22.046325 1463142 command_runner.go:130] > Change: 2023-12-25 12:40:22.015656530 +0000
	I1225 12:40:22.046335 1463142 command_runner.go:130] >  Birth: -
	I1225 12:40:22.046364 1463142 start.go:543] Will wait 60s for crictl version
	I1225 12:40:22.046420 1463142 ssh_runner.go:195] Run: which crictl
	I1225 12:40:22.050352 1463142 command_runner.go:130] > /usr/bin/crictl
	I1225 12:40:22.050507 1463142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:40:22.090420 1463142 command_runner.go:130] > Version:  0.1.0
	I1225 12:40:22.090470 1463142 command_runner.go:130] > RuntimeName:  cri-o
	I1225 12:40:22.090479 1463142 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1225 12:40:22.090488 1463142 command_runner.go:130] > RuntimeApiVersion:  v1
	I1225 12:40:22.092023 1463142 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:40:22.092093 1463142 ssh_runner.go:195] Run: crio --version
	I1225 12:40:22.134898 1463142 command_runner.go:130] > crio version 1.24.1
	I1225 12:40:22.134929 1463142 command_runner.go:130] > Version:          1.24.1
	I1225 12:40:22.134936 1463142 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:40:22.134940 1463142 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:40:22.134947 1463142 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:40:22.134952 1463142 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:40:22.134956 1463142 command_runner.go:130] > Compiler:         gc
	I1225 12:40:22.134960 1463142 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:40:22.134966 1463142 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:40:22.134973 1463142 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:40:22.134977 1463142 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:40:22.134982 1463142 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:40:22.136385 1463142 ssh_runner.go:195] Run: crio --version
	I1225 12:40:22.181286 1463142 command_runner.go:130] > crio version 1.24.1
	I1225 12:40:22.181313 1463142 command_runner.go:130] > Version:          1.24.1
	I1225 12:40:22.181320 1463142 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:40:22.181325 1463142 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:40:22.181334 1463142 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:40:22.181341 1463142 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:40:22.181348 1463142 command_runner.go:130] > Compiler:         gc
	I1225 12:40:22.181355 1463142 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:40:22.181365 1463142 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:40:22.181375 1463142 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:40:22.181384 1463142 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:40:22.181391 1463142 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:40:22.184711 1463142 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:40:22.186389 1463142 out.go:177]   - env NO_PROXY=192.168.39.21
	I1225 12:40:22.187841 1463142 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:40:22.190542 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:22.190906 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:40:22.190936 1463142 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:40:22.191135 1463142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:40:22.195524 1463142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:40:22.209549 1463142 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936 for IP: 192.168.39.205
	I1225 12:40:22.209582 1463142 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:40:22.209740 1463142 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:40:22.209794 1463142 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:40:22.209805 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:40:22.209819 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:40:22.209832 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:40:22.209843 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:40:22.209916 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:40:22.209951 1463142 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:40:22.209982 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:40:22.210014 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:40:22.210043 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:40:22.210079 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:40:22.210140 1463142 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:40:22.210183 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:40:22.210200 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:40:22.210218 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:40:22.210715 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:40:22.234663 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:40:22.259580 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:40:22.284081 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:40:22.309056 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:40:22.333058 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:40:22.357444 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:40:22.380548 1463142 ssh_runner.go:195] Run: openssl version
	I1225 12:40:22.385748 1463142 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1225 12:40:22.386028 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:40:22.395421 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:40:22.399895 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:40:22.400242 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:40:22.400312 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:40:22.405704 1463142 command_runner.go:130] > b5213941
	I1225 12:40:22.406025 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:40:22.416585 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:40:22.426603 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:40:22.431265 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:40:22.431522 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:40:22.431595 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:40:22.436938 1463142 command_runner.go:130] > 51391683
	I1225 12:40:22.437312 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 12:40:22.447832 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:40:22.458701 1463142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:40:22.463614 1463142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:40:22.463958 1463142 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:40:22.464032 1463142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:40:22.470046 1463142 command_runner.go:130] > 3ec20f2e
	I1225 12:40:22.470340 1463142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 12:40:22.480215 1463142 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:40:22.484328 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:40:22.484496 1463142 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:40:22.484586 1463142 ssh_runner.go:195] Run: crio config
	I1225 12:40:22.548161 1463142 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1225 12:40:22.548197 1463142 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1225 12:40:22.548207 1463142 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1225 12:40:22.548213 1463142 command_runner.go:130] > #
	I1225 12:40:22.548225 1463142 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1225 12:40:22.548236 1463142 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1225 12:40:22.548248 1463142 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1225 12:40:22.548263 1463142 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1225 12:40:22.548273 1463142 command_runner.go:130] > # reload'.
	I1225 12:40:22.548284 1463142 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1225 12:40:22.548297 1463142 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1225 12:40:22.548307 1463142 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1225 12:40:22.548317 1463142 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1225 12:40:22.548326 1463142 command_runner.go:130] > [crio]
	I1225 12:40:22.548337 1463142 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1225 12:40:22.548351 1463142 command_runner.go:130] > # containers images, in this directory.
	I1225 12:40:22.548359 1463142 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1225 12:40:22.548372 1463142 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1225 12:40:22.548380 1463142 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1225 12:40:22.548390 1463142 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1225 12:40:22.548399 1463142 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1225 12:40:22.548407 1463142 command_runner.go:130] > storage_driver = "overlay"
	I1225 12:40:22.548419 1463142 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1225 12:40:22.548430 1463142 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1225 12:40:22.548437 1463142 command_runner.go:130] > storage_option = [
	I1225 12:40:22.548444 1463142 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1225 12:40:22.548453 1463142 command_runner.go:130] > ]
	I1225 12:40:22.548464 1463142 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1225 12:40:22.548477 1463142 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1225 12:40:22.548489 1463142 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1225 12:40:22.548501 1463142 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1225 12:40:22.548514 1463142 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1225 12:40:22.548524 1463142 command_runner.go:130] > # always happen on a node reboot
	I1225 12:40:22.548532 1463142 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1225 12:40:22.548545 1463142 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1225 12:40:22.548555 1463142 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1225 12:40:22.548569 1463142 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1225 12:40:22.548582 1463142 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1225 12:40:22.548594 1463142 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1225 12:40:22.548607 1463142 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1225 12:40:22.548617 1463142 command_runner.go:130] > # internal_wipe = true
	I1225 12:40:22.548627 1463142 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1225 12:40:22.548639 1463142 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1225 12:40:22.548651 1463142 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1225 12:40:22.548663 1463142 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1225 12:40:22.548677 1463142 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1225 12:40:22.548683 1463142 command_runner.go:130] > [crio.api]
	I1225 12:40:22.548692 1463142 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1225 12:40:22.548709 1463142 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1225 12:40:22.548720 1463142 command_runner.go:130] > # IP address on which the stream server will listen.
	I1225 12:40:22.548727 1463142 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1225 12:40:22.548741 1463142 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1225 12:40:22.548753 1463142 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1225 12:40:22.548759 1463142 command_runner.go:130] > # stream_port = "0"
	I1225 12:40:22.548768 1463142 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1225 12:40:22.548778 1463142 command_runner.go:130] > # stream_enable_tls = false
	I1225 12:40:22.548788 1463142 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1225 12:40:22.548798 1463142 command_runner.go:130] > # stream_idle_timeout = ""
	I1225 12:40:22.548813 1463142 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1225 12:40:22.548826 1463142 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1225 12:40:22.548835 1463142 command_runner.go:130] > # minutes.
	I1225 12:40:22.548842 1463142 command_runner.go:130] > # stream_tls_cert = ""
	I1225 12:40:22.548854 1463142 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1225 12:40:22.548868 1463142 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1225 12:40:22.548875 1463142 command_runner.go:130] > # stream_tls_key = ""
	I1225 12:40:22.548886 1463142 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1225 12:40:22.548895 1463142 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1225 12:40:22.548901 1463142 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1225 12:40:22.548907 1463142 command_runner.go:130] > # stream_tls_ca = ""
	I1225 12:40:22.548915 1463142 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:40:22.548920 1463142 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1225 12:40:22.548929 1463142 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:40:22.548936 1463142 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1225 12:40:22.548950 1463142 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1225 12:40:22.548958 1463142 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1225 12:40:22.548962 1463142 command_runner.go:130] > [crio.runtime]
	I1225 12:40:22.548968 1463142 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1225 12:40:22.548974 1463142 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1225 12:40:22.548981 1463142 command_runner.go:130] > # "nofile=1024:2048"
	I1225 12:40:22.548991 1463142 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1225 12:40:22.549001 1463142 command_runner.go:130] > # default_ulimits = [
	I1225 12:40:22.551373 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.551394 1463142 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1225 12:40:22.551405 1463142 command_runner.go:130] > # no_pivot = false
	I1225 12:40:22.551419 1463142 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1225 12:40:22.551432 1463142 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1225 12:40:22.551444 1463142 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1225 12:40:22.551458 1463142 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1225 12:40:22.551469 1463142 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1225 12:40:22.551487 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:40:22.551495 1463142 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1225 12:40:22.551503 1463142 command_runner.go:130] > # Cgroup setting for conmon
	I1225 12:40:22.551514 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1225 12:40:22.551521 1463142 command_runner.go:130] > conmon_cgroup = "pod"
	I1225 12:40:22.551533 1463142 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1225 12:40:22.551542 1463142 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1225 12:40:22.551554 1463142 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:40:22.551566 1463142 command_runner.go:130] > conmon_env = [
	I1225 12:40:22.551576 1463142 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1225 12:40:22.551583 1463142 command_runner.go:130] > ]
	I1225 12:40:22.551593 1463142 command_runner.go:130] > # Additional environment variables to set for all the
	I1225 12:40:22.551604 1463142 command_runner.go:130] > # containers. These are overridden if set in the
	I1225 12:40:22.551619 1463142 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1225 12:40:22.551628 1463142 command_runner.go:130] > # default_env = [
	I1225 12:40:22.551635 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.551647 1463142 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1225 12:40:22.551654 1463142 command_runner.go:130] > # selinux = false
	I1225 12:40:22.551665 1463142 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1225 12:40:22.551678 1463142 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1225 12:40:22.551686 1463142 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1225 12:40:22.551695 1463142 command_runner.go:130] > # seccomp_profile = ""
	I1225 12:40:22.551704 1463142 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1225 12:40:22.551717 1463142 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1225 12:40:22.551730 1463142 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1225 12:40:22.551747 1463142 command_runner.go:130] > # which might increase security.
	I1225 12:40:22.551758 1463142 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1225 12:40:22.551768 1463142 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1225 12:40:22.551781 1463142 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1225 12:40:22.551794 1463142 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1225 12:40:22.551806 1463142 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1225 12:40:22.551817 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:40:22.551828 1463142 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1225 12:40:22.551840 1463142 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1225 12:40:22.551850 1463142 command_runner.go:130] > # the cgroup blockio controller.
	I1225 12:40:22.551856 1463142 command_runner.go:130] > # blockio_config_file = ""
	I1225 12:40:22.551869 1463142 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1225 12:40:22.551879 1463142 command_runner.go:130] > # irqbalance daemon.
	I1225 12:40:22.551891 1463142 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1225 12:40:22.551903 1463142 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1225 12:40:22.551914 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:40:22.551923 1463142 command_runner.go:130] > # rdt_config_file = ""
	I1225 12:40:22.551937 1463142 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1225 12:40:22.551948 1463142 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1225 12:40:22.551962 1463142 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1225 12:40:22.551972 1463142 command_runner.go:130] > # separate_pull_cgroup = ""
	I1225 12:40:22.551986 1463142 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1225 12:40:22.551999 1463142 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1225 12:40:22.552012 1463142 command_runner.go:130] > # will be added.
	I1225 12:40:22.552022 1463142 command_runner.go:130] > # default_capabilities = [
	I1225 12:40:22.552032 1463142 command_runner.go:130] > # 	"CHOWN",
	I1225 12:40:22.552039 1463142 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1225 12:40:22.552049 1463142 command_runner.go:130] > # 	"FSETID",
	I1225 12:40:22.552055 1463142 command_runner.go:130] > # 	"FOWNER",
	I1225 12:40:22.552064 1463142 command_runner.go:130] > # 	"SETGID",
	I1225 12:40:22.552070 1463142 command_runner.go:130] > # 	"SETUID",
	I1225 12:40:22.552079 1463142 command_runner.go:130] > # 	"SETPCAP",
	I1225 12:40:22.552086 1463142 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1225 12:40:22.552095 1463142 command_runner.go:130] > # 	"KILL",
	I1225 12:40:22.552099 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552111 1463142 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1225 12:40:22.552121 1463142 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:40:22.552131 1463142 command_runner.go:130] > # default_sysctls = [
	I1225 12:40:22.552136 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552146 1463142 command_runner.go:130] > # List of devices on the host that a
	I1225 12:40:22.552158 1463142 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1225 12:40:22.552168 1463142 command_runner.go:130] > # allowed_devices = [
	I1225 12:40:22.552178 1463142 command_runner.go:130] > # 	"/dev/fuse",
	I1225 12:40:22.552187 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552198 1463142 command_runner.go:130] > # List of additional devices. specified as
	I1225 12:40:22.552214 1463142 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1225 12:40:22.552225 1463142 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1225 12:40:22.552253 1463142 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:40:22.552262 1463142 command_runner.go:130] > # additional_devices = [
	I1225 12:40:22.552265 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552270 1463142 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1225 12:40:22.552274 1463142 command_runner.go:130] > # cdi_spec_dirs = [
	I1225 12:40:22.552278 1463142 command_runner.go:130] > # 	"/etc/cdi",
	I1225 12:40:22.552285 1463142 command_runner.go:130] > # 	"/var/run/cdi",
	I1225 12:40:22.552288 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552297 1463142 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1225 12:40:22.552303 1463142 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1225 12:40:22.552310 1463142 command_runner.go:130] > # Defaults to false.
	I1225 12:40:22.552315 1463142 command_runner.go:130] > # device_ownership_from_security_context = false
	I1225 12:40:22.552324 1463142 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1225 12:40:22.552335 1463142 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1225 12:40:22.552344 1463142 command_runner.go:130] > # hooks_dir = [
	I1225 12:40:22.552351 1463142 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1225 12:40:22.552360 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.552370 1463142 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1225 12:40:22.552384 1463142 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1225 12:40:22.552396 1463142 command_runner.go:130] > # its default mounts from the following two files:
	I1225 12:40:22.552402 1463142 command_runner.go:130] > #
	I1225 12:40:22.552413 1463142 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1225 12:40:22.552426 1463142 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1225 12:40:22.552439 1463142 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1225 12:40:22.552448 1463142 command_runner.go:130] > #
	I1225 12:40:22.552458 1463142 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1225 12:40:22.552469 1463142 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1225 12:40:22.552475 1463142 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1225 12:40:22.552483 1463142 command_runner.go:130] > #      only add mounts it finds in this file.
	I1225 12:40:22.552486 1463142 command_runner.go:130] > #
	I1225 12:40:22.552493 1463142 command_runner.go:130] > # default_mounts_file = ""
	I1225 12:40:22.552499 1463142 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1225 12:40:22.552508 1463142 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1225 12:40:22.552514 1463142 command_runner.go:130] > pids_limit = 1024
	I1225 12:40:22.552520 1463142 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1225 12:40:22.552529 1463142 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1225 12:40:22.552538 1463142 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1225 12:40:22.552554 1463142 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1225 12:40:22.552564 1463142 command_runner.go:130] > # log_size_max = -1
	I1225 12:40:22.552577 1463142 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1225 12:40:22.552588 1463142 command_runner.go:130] > # log_to_journald = false
	I1225 12:40:22.552598 1463142 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1225 12:40:22.552609 1463142 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1225 12:40:22.552621 1463142 command_runner.go:130] > # Path to directory for container attach sockets.
	I1225 12:40:22.552632 1463142 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1225 12:40:22.552643 1463142 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1225 12:40:22.552653 1463142 command_runner.go:130] > # bind_mount_prefix = ""
	I1225 12:40:22.552665 1463142 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1225 12:40:22.552675 1463142 command_runner.go:130] > # read_only = false
	I1225 12:40:22.552687 1463142 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1225 12:40:22.552710 1463142 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1225 12:40:22.552719 1463142 command_runner.go:130] > # live configuration reload.
	I1225 12:40:22.552726 1463142 command_runner.go:130] > # log_level = "info"
	I1225 12:40:22.552738 1463142 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1225 12:40:22.552753 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:40:22.552762 1463142 command_runner.go:130] > # log_filter = ""
	I1225 12:40:22.552772 1463142 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1225 12:40:22.552785 1463142 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1225 12:40:22.552795 1463142 command_runner.go:130] > # separated by comma.
	I1225 12:40:22.552804 1463142 command_runner.go:130] > # uid_mappings = ""
	I1225 12:40:22.552816 1463142 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1225 12:40:22.552829 1463142 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1225 12:40:22.552839 1463142 command_runner.go:130] > # separated by comma.
	I1225 12:40:22.552849 1463142 command_runner.go:130] > # gid_mappings = ""
	I1225 12:40:22.552859 1463142 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1225 12:40:22.552872 1463142 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:40:22.552883 1463142 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:40:22.552893 1463142 command_runner.go:130] > # minimum_mappable_uid = -1
	I1225 12:40:22.552903 1463142 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1225 12:40:22.552916 1463142 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:40:22.552929 1463142 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:40:22.552939 1463142 command_runner.go:130] > # minimum_mappable_gid = -1
	I1225 12:40:22.552952 1463142 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1225 12:40:22.552964 1463142 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1225 12:40:22.552977 1463142 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1225 12:40:22.552987 1463142 command_runner.go:130] > # ctr_stop_timeout = 30
	I1225 12:40:22.552999 1463142 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1225 12:40:22.553012 1463142 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1225 12:40:22.553023 1463142 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1225 12:40:22.553031 1463142 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1225 12:40:22.553039 1463142 command_runner.go:130] > drop_infra_ctr = false
	I1225 12:40:22.553045 1463142 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1225 12:40:22.553053 1463142 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1225 12:40:22.553060 1463142 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1225 12:40:22.553067 1463142 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1225 12:40:22.553073 1463142 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1225 12:40:22.553079 1463142 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1225 12:40:22.553084 1463142 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1225 12:40:22.553093 1463142 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1225 12:40:22.553097 1463142 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1225 12:40:22.553103 1463142 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1225 12:40:22.553112 1463142 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1225 12:40:22.553118 1463142 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1225 12:40:22.553124 1463142 command_runner.go:130] > # default_runtime = "runc"
	I1225 12:40:22.553129 1463142 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1225 12:40:22.553139 1463142 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1225 12:40:22.553149 1463142 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1225 12:40:22.553156 1463142 command_runner.go:130] > # creation as a file is not desired either.
	I1225 12:40:22.553164 1463142 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1225 12:40:22.553170 1463142 command_runner.go:130] > # the hostname is being managed dynamically.
	I1225 12:40:22.553174 1463142 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1225 12:40:22.553178 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.553185 1463142 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1225 12:40:22.553193 1463142 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1225 12:40:22.553200 1463142 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1225 12:40:22.553208 1463142 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1225 12:40:22.553212 1463142 command_runner.go:130] > #
	I1225 12:40:22.553217 1463142 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1225 12:40:22.553224 1463142 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1225 12:40:22.553229 1463142 command_runner.go:130] > #  runtime_type = "oci"
	I1225 12:40:22.553236 1463142 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1225 12:40:22.553241 1463142 command_runner.go:130] > #  privileged_without_host_devices = false
	I1225 12:40:22.553246 1463142 command_runner.go:130] > #  allowed_annotations = []
	I1225 12:40:22.553250 1463142 command_runner.go:130] > # Where:
	I1225 12:40:22.553257 1463142 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1225 12:40:22.553263 1463142 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1225 12:40:22.553270 1463142 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1225 12:40:22.553278 1463142 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1225 12:40:22.553282 1463142 command_runner.go:130] > #   in $PATH.
	I1225 12:40:22.553291 1463142 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1225 12:40:22.553298 1463142 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1225 12:40:22.553304 1463142 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1225 12:40:22.553310 1463142 command_runner.go:130] > #   state.
	I1225 12:40:22.553316 1463142 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1225 12:40:22.553322 1463142 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1225 12:40:22.553330 1463142 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1225 12:40:22.553336 1463142 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1225 12:40:22.553344 1463142 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1225 12:40:22.553351 1463142 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1225 12:40:22.553358 1463142 command_runner.go:130] > #   The currently recognized values are:
	I1225 12:40:22.553364 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1225 12:40:22.553373 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1225 12:40:22.553383 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1225 12:40:22.553388 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1225 12:40:22.553400 1463142 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1225 12:40:22.553408 1463142 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1225 12:40:22.553414 1463142 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1225 12:40:22.553423 1463142 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1225 12:40:22.553428 1463142 command_runner.go:130] > #   should be moved to the container's cgroup
	I1225 12:40:22.553433 1463142 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1225 12:40:22.553438 1463142 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1225 12:40:22.553444 1463142 command_runner.go:130] > runtime_type = "oci"
	I1225 12:40:22.553449 1463142 command_runner.go:130] > runtime_root = "/run/runc"
	I1225 12:40:22.553455 1463142 command_runner.go:130] > runtime_config_path = ""
	I1225 12:40:22.553459 1463142 command_runner.go:130] > monitor_path = ""
	I1225 12:40:22.553466 1463142 command_runner.go:130] > monitor_cgroup = ""
	I1225 12:40:22.553470 1463142 command_runner.go:130] > monitor_exec_cgroup = ""
	I1225 12:40:22.553478 1463142 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1225 12:40:22.553484 1463142 command_runner.go:130] > # running containers
	I1225 12:40:22.553488 1463142 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1225 12:40:22.553494 1463142 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1225 12:40:22.553521 1463142 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1225 12:40:22.553529 1463142 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1225 12:40:22.553536 1463142 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1225 12:40:22.553543 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1225 12:40:22.553549 1463142 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1225 12:40:22.553554 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1225 12:40:22.553561 1463142 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1225 12:40:22.553567 1463142 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
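	The handler names in the runtimes table above (runc here; kata-qemu and friends when uncommented) are what Kubernetes passes to CRI-O through the CRI when a pod references a RuntimeClass with a matching handler. Below is a minimal client-go sketch of defining such a RuntimeClass; it is illustrative only, since the cluster under test only defines the runc handler, and the kubeconfig path is the one this run loads.

	```go
	package main

	import (
		"context"
		"fmt"

		nodev1 "k8s.io/api/node/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig loaded by this run (any working kubeconfig does).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17847-1442600/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Handler must match a [crio.runtime.runtimes.<name>] entry in crio.conf.
		// "kata-qemu" is hypothetical here; this cluster only defines "runc".
		rc := &nodev1.RuntimeClass{
			ObjectMeta: metav1.ObjectMeta{Name: "kata-qemu"},
			Handler:    "kata-qemu",
		}
		created, err := cs.NodeV1().RuntimeClasses().Create(context.Background(), rc, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created RuntimeClass", created.Name)
	}
	```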
	I1225 12:40:22.553576 1463142 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1225 12:40:22.553581 1463142 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1225 12:40:22.553589 1463142 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1225 12:40:22.553599 1463142 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1225 12:40:22.553608 1463142 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1225 12:40:22.553616 1463142 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1225 12:40:22.553629 1463142 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1225 12:40:22.553639 1463142 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1225 12:40:22.553647 1463142 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1225 12:40:22.553656 1463142 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1225 12:40:22.553663 1463142 command_runner.go:130] > # Example:
	I1225 12:40:22.553668 1463142 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1225 12:40:22.553675 1463142 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1225 12:40:22.553680 1463142 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1225 12:40:22.553687 1463142 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1225 12:40:22.553694 1463142 command_runner.go:130] > # cpuset = 0
	I1225 12:40:22.553698 1463142 command_runner.go:130] > # cpushares = "0-1"
	I1225 12:40:22.553704 1463142 command_runner.go:130] > # Where:
	I1225 12:40:22.553709 1463142 command_runner.go:130] > # The workload name is workload-type.
	I1225 12:40:22.553718 1463142 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1225 12:40:22.553730 1463142 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1225 12:40:22.553747 1463142 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1225 12:40:22.553763 1463142 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1225 12:40:22.553775 1463142 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1225 12:40:22.553783 1463142 command_runner.go:130] > # 
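	From the Kubernetes side, opting a pod into the example workload above only takes the activation annotation, plus an optional per-container override of the form shown in the last comment line. A sketch of the pod metadata follows; the container name app and the cpushares value 512 are hypothetical.

	```go
	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					// Activation annotation from the example: key only, value ignored.
					"io.crio/workload": "",
					// Per-container override; "app" and "512" are hypothetical.
					"io.crio.workload-type/app": `{"cpushares": "512"}`,
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
			},
		}

		out, _ := json.MarshalIndent(pod.Annotations, "", "  ")
		fmt.Println(string(out))
	}
	```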
	I1225 12:40:22.553797 1463142 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1225 12:40:22.553806 1463142 command_runner.go:130] > #
	I1225 12:40:22.553816 1463142 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1225 12:40:22.553829 1463142 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1225 12:40:22.553843 1463142 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1225 12:40:22.553858 1463142 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1225 12:40:22.553870 1463142 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1225 12:40:22.553879 1463142 command_runner.go:130] > [crio.image]
	I1225 12:40:22.553892 1463142 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1225 12:40:22.553901 1463142 command_runner.go:130] > # default_transport = "docker://"
	I1225 12:40:22.553910 1463142 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1225 12:40:22.553918 1463142 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:40:22.553925 1463142 command_runner.go:130] > # global_auth_file = ""
	I1225 12:40:22.553930 1463142 command_runner.go:130] > # The image used to instantiate infra containers.
	I1225 12:40:22.553938 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:40:22.553943 1463142 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1225 12:40:22.553951 1463142 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1225 12:40:22.553959 1463142 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:40:22.553966 1463142 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:40:22.553971 1463142 command_runner.go:130] > # pause_image_auth_file = ""
	I1225 12:40:22.553979 1463142 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1225 12:40:22.553987 1463142 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1225 12:40:22.553996 1463142 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1225 12:40:22.554002 1463142 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1225 12:40:22.554009 1463142 command_runner.go:130] > # pause_command = "/pause"
	I1225 12:40:22.554015 1463142 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1225 12:40:22.554024 1463142 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1225 12:40:22.554030 1463142 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1225 12:40:22.554038 1463142 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1225 12:40:22.554044 1463142 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1225 12:40:22.554050 1463142 command_runner.go:130] > # signature_policy = ""
	I1225 12:40:22.554056 1463142 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1225 12:40:22.554064 1463142 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1225 12:40:22.554068 1463142 command_runner.go:130] > # changing them here.
	I1225 12:40:22.554075 1463142 command_runner.go:130] > # insecure_registries = [
	I1225 12:40:22.554078 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.554086 1463142 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1225 12:40:22.554093 1463142 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1225 12:40:22.554098 1463142 command_runner.go:130] > # image_volumes = "mkdir"
	I1225 12:40:22.554104 1463142 command_runner.go:130] > # Temporary directory to use for storing big files
	I1225 12:40:22.554108 1463142 command_runner.go:130] > # big_files_temporary_dir = ""
	I1225 12:40:22.554117 1463142 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1225 12:40:22.554121 1463142 command_runner.go:130] > # CNI plugins.
	I1225 12:40:22.554127 1463142 command_runner.go:130] > [crio.network]
	I1225 12:40:22.554133 1463142 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1225 12:40:22.554141 1463142 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1225 12:40:22.554146 1463142 command_runner.go:130] > # cni_default_network = ""
	I1225 12:40:22.554154 1463142 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1225 12:40:22.554160 1463142 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1225 12:40:22.554168 1463142 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1225 12:40:22.554172 1463142 command_runner.go:130] > # plugin_dirs = [
	I1225 12:40:22.554178 1463142 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1225 12:40:22.554181 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.554187 1463142 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1225 12:40:22.554191 1463142 command_runner.go:130] > [crio.metrics]
	I1225 12:40:22.554198 1463142 command_runner.go:130] > # Globally enable or disable metrics support.
	I1225 12:40:22.554202 1463142 command_runner.go:130] > enable_metrics = true
	I1225 12:40:22.554209 1463142 command_runner.go:130] > # Specify enabled metrics collectors.
	I1225 12:40:22.554214 1463142 command_runner.go:130] > # Per default all metrics are enabled.
	I1225 12:40:22.554222 1463142 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1225 12:40:22.554232 1463142 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1225 12:40:22.554240 1463142 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1225 12:40:22.554247 1463142 command_runner.go:130] > # metrics_collectors = [
	I1225 12:40:22.554251 1463142 command_runner.go:130] > # 	"operations",
	I1225 12:40:22.554258 1463142 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1225 12:40:22.554263 1463142 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1225 12:40:22.554269 1463142 command_runner.go:130] > # 	"operations_errors",
	I1225 12:40:22.554273 1463142 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1225 12:40:22.554278 1463142 command_runner.go:130] > # 	"image_pulls_by_name",
	I1225 12:40:22.554284 1463142 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1225 12:40:22.554289 1463142 command_runner.go:130] > # 	"image_pulls_failures",
	I1225 12:40:22.554295 1463142 command_runner.go:130] > # 	"image_pulls_successes",
	I1225 12:40:22.554300 1463142 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1225 12:40:22.554306 1463142 command_runner.go:130] > # 	"image_layer_reuse",
	I1225 12:40:22.554310 1463142 command_runner.go:130] > # 	"containers_oom_total",
	I1225 12:40:22.554316 1463142 command_runner.go:130] > # 	"containers_oom",
	I1225 12:40:22.554321 1463142 command_runner.go:130] > # 	"processes_defunct",
	I1225 12:40:22.554327 1463142 command_runner.go:130] > # 	"operations_total",
	I1225 12:40:22.554332 1463142 command_runner.go:130] > # 	"operations_latency_seconds",
	I1225 12:40:22.554339 1463142 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1225 12:40:22.554343 1463142 command_runner.go:130] > # 	"operations_errors_total",
	I1225 12:40:22.554351 1463142 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1225 12:40:22.554357 1463142 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1225 12:40:22.554362 1463142 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1225 12:40:22.554369 1463142 command_runner.go:130] > # 	"image_pulls_success_total",
	I1225 12:40:22.554373 1463142 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1225 12:40:22.554380 1463142 command_runner.go:130] > # 	"containers_oom_count_total",
	I1225 12:40:22.554384 1463142 command_runner.go:130] > # ]
	I1225 12:40:22.554391 1463142 command_runner.go:130] > # The port on which the metrics server will listen.
	I1225 12:40:22.554398 1463142 command_runner.go:130] > # metrics_port = 9090
	I1225 12:40:22.554403 1463142 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1225 12:40:22.554410 1463142 command_runner.go:130] > # metrics_socket = ""
	I1225 12:40:22.554415 1463142 command_runner.go:130] > # The certificate for the secure metrics server.
	I1225 12:40:22.554423 1463142 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1225 12:40:22.554443 1463142 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1225 12:40:22.554453 1463142 command_runner.go:130] > # certificate on any modification event.
	I1225 12:40:22.554461 1463142 command_runner.go:130] > # metrics_cert = ""
	I1225 12:40:22.554470 1463142 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1225 12:40:22.554476 1463142 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1225 12:40:22.554482 1463142 command_runner.go:130] > # metrics_key = ""
	I1225 12:40:22.554488 1463142 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1225 12:40:22.554494 1463142 command_runner.go:130] > [crio.tracing]
	I1225 12:40:22.554500 1463142 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1225 12:40:22.554506 1463142 command_runner.go:130] > # enable_tracing = false
	I1225 12:40:22.554512 1463142 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1225 12:40:22.554518 1463142 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1225 12:40:22.554524 1463142 command_runner.go:130] > # Number of samples to collect per million spans.
	I1225 12:40:22.554530 1463142 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1225 12:40:22.554536 1463142 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1225 12:40:22.554542 1463142 command_runner.go:130] > [crio.stats]
	I1225 12:40:22.554549 1463142 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1225 12:40:22.554556 1463142 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1225 12:40:22.554563 1463142 command_runner.go:130] > # stats_collection_period = 0
	I1225 12:40:22.554601 1463142 command_runner.go:130] ! time="2023-12-25 12:40:22.534466145Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1225 12:40:22.554615 1463142 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
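	With enable_metrics = true and the default (commented-out) metrics_port of 9090, the collectors listed above are served in Prometheus text format on the node. A quick standard-library sketch for spot-checking the image-pull counters; the port and the local address are assumptions based on the defaults shown.

	```go
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// Assumption: default metrics_port (9090), scraped on the node running CRI-O.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}

		// Show only the image-pull counters from the collector list above; the exact
		// "crio_" / "container_runtime_" prefixing depends on the CRI-O version.
		for _, line := range strings.Split(string(body), "\n") {
			if strings.Contains(line, "crio_image_pulls") {
				fmt.Println(line)
			}
		}
	}
	```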
	I1225 12:40:22.554678 1463142 cni.go:84] Creating CNI manager for ""
	I1225 12:40:22.554691 1463142 cni.go:136] 2 nodes found, recommending kindnet
	I1225 12:40:22.554712 1463142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:40:22.554737 1463142 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-544936 NodeName:multinode-544936-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:40:22.554881 1463142 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-544936-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
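	The kubeadm config above is rendered once per node, with only the node-specific values (advertise address, node name, node-ip) changing between the control plane and this m02 worker. A text/template sketch of such a per-node render follows; the template fragment is a trimmed illustration, not minikube's actual template.

	```go
	package main

	import (
		"os"
		"text/template"
	)

	// nodeParams holds the per-node values that differ between the control plane
	// and the joining worker multinode-544936-m02.
	type nodeParams struct {
		AdvertiseAddress string
		NodeName         string
		NodeIP           string
	}

	// Trimmed, illustrative fragment of the InitConfiguration shown above.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		p := nodeParams{
			AdvertiseAddress: "192.168.39.205",
			NodeName:         "multinode-544936-m02",
			NodeIP:           "192.168.39.205",
		}
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
	```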
	I1225 12:40:22.554937 1463142 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-544936-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 12:40:22.554995 1463142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:40:22.564802 1463142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1225 12:40:22.564868 1463142 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1225 12:40:22.564934 1463142 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1225 12:40:22.574421 1463142 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1225 12:40:22.574469 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1225 12:40:22.574540 1463142 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1225 12:40:22.574546 1463142 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1225 12:40:22.574582 1463142 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1225 12:40:22.582097 1463142 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1225 12:40:22.582166 1463142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1225 12:40:22.582196 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1225 12:40:23.495618 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:40:23.510601 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1225 12:40:23.510736 1463142 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1225 12:40:23.514975 1463142 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1225 12:40:23.515064 1463142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1225 12:40:23.515101 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I1225 12:40:26.255480 1463142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1225 12:40:26.255566 1463142 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1225 12:40:26.260507 1463142 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1225 12:40:26.260848 1463142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1225 12:40:26.260887 1463142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
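	Each binary URL above carries a ?checksum=file:...sha256 hint: the binary is downloaded together with its published SHA-256 and verified before being copied into /var/lib/minikube/binaries. A standard-library sketch of that verification for the kubeadm URL from the log; error handling is kept minimal.

	```go
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch downloads a URL fully into memory (fine for a ~50 MB binary sketch).
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		h := sha256.Sum256(bin)
		got := hex.EncodeToString(h[:])
		want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest

		if got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
		}
		fmt.Println("kubeadm checksum verified:", got)
	}
	```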
	I1225 12:40:26.506293 1463142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1225 12:40:26.516215 1463142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1225 12:40:26.533534 1463142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:40:26.550322 1463142 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I1225 12:40:26.554416 1463142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
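	The bash one-liner above keeps /etc/hosts idempotent: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back into place. The same logic as a Go sketch; the IP and hostname are taken from the log, but this is an illustration rather than minikube's implementation.

	```go
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const (
			hostsPath = "/etc/hosts"
			hostname  = "control-plane.minikube.internal"
			ip        = "192.168.39.21"
		)

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}

		// Drop any line that already maps the control-plane hostname (the grep -v above).
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		// Append the current mapping (the echo in the one-liner above).
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))

		// Needs root, just like the sudo cp in the one-liner.
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
	```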
	I1225 12:40:26.566705 1463142 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:40:26.567016 1463142 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:40:26.567100 1463142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:40:26.567151 1463142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:40:26.583702 1463142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I1225 12:40:26.584259 1463142 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:40:26.584808 1463142 main.go:141] libmachine: Using API Version  1
	I1225 12:40:26.584833 1463142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:40:26.585261 1463142 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:40:26.585468 1463142 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:40:26.585651 1463142 start.go:304] JoinCluster: &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:40:26.585748 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1225 12:40:26.585767 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:40:26.589270 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:40:26.589749 1463142 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:40:26.589794 1463142 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:40:26.589948 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:40:26.590191 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:40:26.590353 1463142 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:40:26.590521 1463142 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:40:26.785066 1463142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 87whto.ah3w8bwghntxfphi --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 12:40:26.785138 1463142 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:40:26.785180 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 87whto.ah3w8bwghntxfphi --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-544936-m02"
	I1225 12:40:26.831603 1463142 command_runner.go:130] > [preflight] Running pre-flight checks
	I1225 12:40:26.987254 1463142 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1225 12:40:26.987295 1463142 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1225 12:40:27.030991 1463142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:40:27.031027 1463142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:40:27.031037 1463142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1225 12:40:27.146538 1463142 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1225 12:40:29.662239 1463142 command_runner.go:130] > This node has joined the cluster:
	I1225 12:40:29.662267 1463142 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1225 12:40:29.662274 1463142 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1225 12:40:29.662280 1463142 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1225 12:40:29.663825 1463142 command_runner.go:130] ! W1225 12:40:26.823723     818 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1225 12:40:29.663883 1463142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 12:40:29.663918 1463142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 87whto.ah3w8bwghntxfphi --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-544936-m02": (2.87872091s)
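	The --discovery-token-ca-cert-hash in the join command above pins the cluster CA: it is the SHA-256 of the CA certificate's Subject Public Key Info, prefixed with sha256:. A sketch that reproduces the value from the CA file used by this run's kubeconfig.

	```go
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// CA used by this run's kubeconfig.
		pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}

		// kubeadm pins the SHA-256 of the CA's Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
	}
	```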
	I1225 12:40:29.663938 1463142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1225 12:40:29.804143 1463142 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1225 12:40:29.955567 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=multinode-544936 minikube.k8s.io/updated_at=2023_12_25T12_40_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:40:30.066487 1463142 command_runner.go:130] > node/multinode-544936-m02 labeled
	I1225 12:40:30.068898 1463142 start.go:306] JoinCluster complete in 3.48324218s
	I1225 12:40:30.068929 1463142 cni.go:84] Creating CNI manager for ""
	I1225 12:40:30.068944 1463142 cni.go:136] 2 nodes found, recommending kindnet
	I1225 12:40:30.068996 1463142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 12:40:30.075756 1463142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1225 12:40:30.075791 1463142 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1225 12:40:30.075803 1463142 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1225 12:40:30.075812 1463142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:40:30.075821 1463142 command_runner.go:130] > Access: 2023-12-25 12:39:00.887097634 +0000
	I1225 12:40:30.075829 1463142 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1225 12:40:30.075837 1463142 command_runner.go:130] > Change: 2023-12-25 12:38:59.067097634 +0000
	I1225 12:40:30.075847 1463142 command_runner.go:130] >  Birth: -
	I1225 12:40:30.076091 1463142 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1225 12:40:30.076109 1463142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1225 12:40:30.095517 1463142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 12:40:30.395886 1463142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:40:30.400512 1463142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:40:30.403266 1463142 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1225 12:40:30.416120 1463142 command_runner.go:130] > daemonset.apps/kindnet configured
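	Applying the kindnet manifest is a check-then-apply step: stat /opt/cni/bin/portmap to confirm the standard CNI plugins are present, then run the cached kubectl against the in-VM kubeconfig. A small os/exec sketch of the same steps with the paths from the log; it would need to run as root on the node.

	```go
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// hostPort handling in the CNI chain needs the portmap plugin on the node.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			panic(fmt.Errorf("CNI plugins missing: %w", err))
		}

		// Same command as the log, minus sudo: apply the kindnet manifest.
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
	```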
	I1225 12:40:30.419210 1463142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:40:30.419458 1463142 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:40:30.419891 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:40:30.419905 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:30.419913 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:30.419925 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:30.422462 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:30.422485 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:30.422496 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:30.422504 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:30.422513 1463142 round_trippers.go:580]     Content-Length: 291
	I1225 12:40:30.422523 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:30 GMT
	I1225 12:40:30.422531 1463142 round_trippers.go:580]     Audit-Id: d40c3cff-8a6e-4167-923b-d1cf81da3833
	I1225 12:40:30.422541 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:30.422552 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:30.422586 1463142 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"437","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1225 12:40:30.422789 1463142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-544936" context rescaled to 1 replicas
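	The scale GET above is half of the rescale: read the Scale subresource of the coredns deployment, then write it back with spec.replicas set to 1. A client-go sketch of both halves; the kubeconfig path is the one loaded earlier in the log.

	```go
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17847-1442600/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		deployments := cs.AppsV1().Deployments("kube-system")

		// GET .../deployments/coredns/scale, as in the round-tripper log above.
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns scaled to 1 replica")
	}
	```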
	I1225 12:40:30.422829 1463142 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:40:30.424903 1463142 out.go:177] * Verifying Kubernetes components...
	I1225 12:40:30.426260 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:40:30.442935 1463142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:40:30.443160 1463142 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:40:30.443512 1463142 node_ready.go:35] waiting up to 6m0s for node "multinode-544936-m02" to be "Ready" ...
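	The repeated node GETs that follow implement this wait: fetch the node object and check its Ready condition until it reports True or the 6-minute budget runs out. A minimal client-go sketch of the same loop; the 500 ms interval matches the spacing of the requests in the log.

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17847-1442600/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-544936-m02", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for node to become Ready")
	}
	```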
	I1225 12:40:30.443601 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:30.443609 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:30.443617 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:30.443623 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:30.446540 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:30.446561 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:30.446568 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:30.446573 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:30.446578 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:30.446583 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:30.446588 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:30.446596 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:30 GMT
	I1225 12:40:30.446602 1463142 round_trippers.go:580]     Audit-Id: d5cd2670-5e8b-4ecd-bb37-aab455ebc7c8
	I1225 12:40:30.447285 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:30.943976 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:30.944012 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:30.944024 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:30.944034 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:30.949370 1463142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1225 12:40:30.949397 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:30.949407 1463142 round_trippers.go:580]     Audit-Id: 32d12c0a-0282-4951-aa0b-602c76832331
	I1225 12:40:30.949416 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:30.949425 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:30.949434 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:30.949446 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:30.949458 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:30.949467 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:30 GMT
	I1225 12:40:30.949586 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:31.443861 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:31.443891 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:31.443900 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:31.443912 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:31.448060 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:40:31.448086 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:31.448096 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:31.448104 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:31.448111 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:31 GMT
	I1225 12:40:31.448119 1463142 round_trippers.go:580]     Audit-Id: 6bad2087-11ec-4f2f-8b89-767576952a1a
	I1225 12:40:31.448127 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:31.448135 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:31.448147 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:31.448253 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:31.944233 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:31.944265 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:31.944279 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:31.944289 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:31.947595 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:31.947644 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:31.947657 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:31.947667 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:31.947677 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:31.947687 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:31 GMT
	I1225 12:40:31.947709 1463142 round_trippers.go:580]     Audit-Id: b57110f0-8a89-4b5f-819c-e480d9229254
	I1225 12:40:31.947720 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:31.947728 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:31.947864 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:32.444369 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:32.444402 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:32.444413 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:32.444423 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:32.449114 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:40:32.449142 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:32.449152 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:32.449169 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:32.449177 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:32 GMT
	I1225 12:40:32.449185 1463142 round_trippers.go:580]     Audit-Id: edd6a12f-5e6b-49b3-a7b4-9912d729d03a
	I1225 12:40:32.449193 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:32.449202 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:32.449211 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:32.449273 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:32.449552 1463142 node_ready.go:58] node "multinode-544936-m02" has status "Ready":"False"
	I1225 12:40:32.944587 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:32.944706 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:32.944725 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:32.944733 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:32.948694 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:32.948727 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:32.948739 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:32 GMT
	I1225 12:40:32.948749 1463142 round_trippers.go:580]     Audit-Id: 935cf917-ccb3-403f-80c3-c7e421aa1986
	I1225 12:40:32.948773 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:32.948781 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:32.948794 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:32.948802 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:32.948817 1463142 round_trippers.go:580]     Content-Length: 4083
	I1225 12:40:32.949109 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"495","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1225 12:40:33.444623 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:33.444653 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:33.444665 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:33.444674 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:33.461831 1463142 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1225 12:40:33.461864 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:33.461875 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:33.461885 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:33 GMT
	I1225 12:40:33.461893 1463142 round_trippers.go:580]     Audit-Id: 25e7517c-f3fc-4288-8da4-de636389e31f
	I1225 12:40:33.461902 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:33.461912 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:33.461920 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:33.462125 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:33.943973 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:33.944010 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:33.944029 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:33.944037 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:33.949204 1463142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1225 12:40:33.949238 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:33.949249 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:33.949257 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:33 GMT
	I1225 12:40:33.949265 1463142 round_trippers.go:580]     Audit-Id: 49d47dd4-ee48-40f4-96ab-d76a99643855
	I1225 12:40:33.949273 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:33.949281 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:33.949288 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:33.950046 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:34.443792 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:34.443822 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:34.443831 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:34.443837 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:34.448779 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:40:34.448811 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:34.448823 1463142 round_trippers.go:580]     Audit-Id: cc622783-bf91-46ef-ac19-25bfece598d0
	I1225 12:40:34.448832 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:34.448839 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:34.448848 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:34.448856 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:34.448866 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:34 GMT
	I1225 12:40:34.449241 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:34.449613 1463142 node_ready.go:58] node "multinode-544936-m02" has status "Ready":"False"
	I1225 12:40:34.944574 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:34.944598 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:34.944613 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:34.944619 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:34.947727 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:34.947751 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:34.947758 1463142 round_trippers.go:580]     Audit-Id: d85abbb6-92cb-43c6-bc7d-938dce29b0ae
	I1225 12:40:34.947764 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:34.947769 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:34.947774 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:34.947779 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:34.947784 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:34 GMT
	I1225 12:40:34.948082 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:35.443760 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:35.443797 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:35.443808 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:35.443816 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:35.448817 1463142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:40:35.448853 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:35.448865 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:35.448874 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:35.448881 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:35.448889 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:35.448897 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:35 GMT
	I1225 12:40:35.448905 1463142 round_trippers.go:580]     Audit-Id: f52b3445-62c4-4d5d-a7fe-a9d109d5c8a1
	I1225 12:40:35.449113 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:35.943745 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:35.943772 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:35.943781 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:35.943787 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:35.946487 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:35.946508 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:35.946515 1463142 round_trippers.go:580]     Audit-Id: e4fd68df-a1e6-4468-8e13-31a41fea92d4
	I1225 12:40:35.946520 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:35.946525 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:35.946532 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:35.946537 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:35.946542 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:35 GMT
	I1225 12:40:35.946732 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:36.444479 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:36.444508 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:36.444517 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:36.444523 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:36.447659 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:36.447681 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:36.447688 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:36.447694 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:36.447702 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:36.447709 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:36 GMT
	I1225 12:40:36.447714 1463142 round_trippers.go:580]     Audit-Id: 4e495ea6-53db-4df2-bd5e-cd2709d35ca4
	I1225 12:40:36.447719 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:36.448282 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:36.943697 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:36.943722 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:36.943731 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:36.943737 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:36.946459 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:36.946483 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:36.946490 1463142 round_trippers.go:580]     Audit-Id: 0f1af891-cdaa-4fc2-a39d-b7fe98724b72
	I1225 12:40:36.946496 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:36.946501 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:36.946506 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:36.946511 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:36.946516 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:36 GMT
	I1225 12:40:36.946941 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:36.947243 1463142 node_ready.go:58] node "multinode-544936-m02" has status "Ready":"False"
	I1225 12:40:37.444718 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:37.444743 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.444752 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.444762 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.447293 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.447328 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.447335 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.447345 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.447352 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.447359 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.447366 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.447373 1463142 round_trippers.go:580]     Audit-Id: 6d45a275-6085-4bf1-aa97-721912c3875f
	I1225 12:40:37.447837 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"500","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I1225 12:40:37.944586 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:37.944616 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.944625 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.944632 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.947352 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.947384 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.947393 1463142 round_trippers.go:580]     Audit-Id: ada88a98-e1d6-4b5a-86b2-041780d52d91
	I1225 12:40:37.947410 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.947418 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.947426 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.947434 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.947441 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.947719 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"518","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3254 chars]
	I1225 12:40:37.948078 1463142 node_ready.go:49] node "multinode-544936-m02" has status "Ready":"True"
	I1225 12:40:37.948105 1463142 node_ready.go:38] duration metric: took 7.504567137s waiting for node "multinode-544936-m02" to be "Ready" ...
	I1225 12:40:37.948115 1463142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
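(The pod_ready.go wait announced above polls the kube-system pods through the API server until each one reports a Ready condition of True, which is what the GET/response pairs that follow show. As a rough, hedged sketch of that pattern with client-go — not minikube's actual implementation; the kubeconfig path, label selector, poll interval and timeout below are illustrative assumptions — the loop could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location points at the cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, for up to 6 minutes, until every kube-system pod
	// matching the label reports Ready (the label is illustrative only).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-proxy",
			})
			if err != nil {
				return false, err
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Ready")
}

End of sketch; the log resumes below.)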
	I1225 12:40:37.948183 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:40:37.948191 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.948198 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.948204 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.951727 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:37.951748 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.951757 1463142 round_trippers.go:580]     Audit-Id: 06fd5d12-9e46-4417-a57b-8d7e313e2eca
	I1225 12:40:37.951766 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.951783 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.951788 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.951796 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.951801 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.953745 1463142 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"433","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67332 chars]
	I1225 12:40:37.956471 1463142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.956564 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:40:37.956573 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.956581 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.956587 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.959100 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.959119 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.959130 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.959136 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.959141 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.959150 1463142 round_trippers.go:580]     Audit-Id: 5493119e-e6fa-41f2-b0e5-33deccf67c1c
	I1225 12:40:37.959159 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.959168 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.959417 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"433","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1225 12:40:37.960000 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:37.960016 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.960024 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.960031 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.962453 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.962472 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.962481 1463142 round_trippers.go:580]     Audit-Id: f28fb4a1-1f0d-4d6b-8df0-410e12ceefb0
	I1225 12:40:37.962489 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.962499 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.962515 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.962526 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.962536 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.962783 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:37.963156 1463142 pod_ready.go:92] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:37.963178 1463142 pod_ready.go:81] duration metric: took 6.679753ms waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.963188 1463142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.963247 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:40:37.963255 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.963262 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.963268 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.965352 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.965369 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.965378 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.965386 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.965393 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.965401 1463142 round_trippers.go:580]     Audit-Id: ce5db247-1fa8-4b1f-b1cb-b00cbc947091
	I1225 12:40:37.965409 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.965430 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.965625 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"382","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1225 12:40:37.966022 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:37.966034 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.966041 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.966047 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.968768 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.968785 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.968794 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.968800 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.968806 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.968811 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.968816 1463142 round_trippers.go:580]     Audit-Id: f444ba02-9ac2-43f7-a735-58a70615f4b4
	I1225 12:40:37.968822 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.969615 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:37.969912 1463142 pod_ready.go:92] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:37.969925 1463142 pod_ready.go:81] duration metric: took 6.730507ms waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.969938 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.970003 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:40:37.970007 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.970014 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.970020 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.972139 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.972160 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.972168 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.972176 1463142 round_trippers.go:580]     Audit-Id: 0c1e3537-2e2f-49e8-a6d4-d78fd065d379
	I1225 12:40:37.972192 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.972200 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.972216 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.972223 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.972397 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"300","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1225 12:40:37.972894 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:37.972917 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.972927 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.972935 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.975582 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.975605 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.975615 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.975624 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.975632 1463142 round_trippers.go:580]     Audit-Id: 6d8966f4-aec1-4039-8f90-4fb98da7bc01
	I1225 12:40:37.975641 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.975649 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.975665 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.976474 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:37.976919 1463142 pod_ready.go:92] pod "kube-apiserver-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:37.976943 1463142 pod_ready.go:81] duration metric: took 6.998667ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.976953 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.977016 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:40:37.977031 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.977038 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.977044 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.979451 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.979471 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.979482 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.979488 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.979494 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.979499 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.979503 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.979508 1463142 round_trippers.go:580]     Audit-Id: ed88964f-e693-49d4-826f-16c9d811c8d6
	I1225 12:40:37.979726 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"296","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1225 12:40:37.980198 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:37.980213 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:37.980220 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:37.980226 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:37.982589 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:37.982613 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:37.982623 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:37 GMT
	I1225 12:40:37.982631 1463142 round_trippers.go:580]     Audit-Id: 03caaf11-4f33-4434-bfeb-0ee614cae4fd
	I1225 12:40:37.982646 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:37.982654 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:37.982663 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:37.982674 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:37.982825 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:37.983240 1463142 pod_ready.go:92] pod "kube-controller-manager-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:37.983268 1463142 pod_ready.go:81] duration metric: took 6.307304ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:37.983281 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:38.144636 1463142 request.go:629] Waited for 161.249947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:40:38.144725 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:40:38.144733 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:38.144745 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:38.144764 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:38.147465 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:38.147492 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:38.147503 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:38.147512 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:38 GMT
	I1225 12:40:38.147520 1463142 round_trippers.go:580]     Audit-Id: 4d68792f-1a61-48ad-84e6-acf5b8e59acf
	I1225 12:40:38.147532 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:38.147544 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:38.147555 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:38.148105 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"507","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1225 12:40:38.344997 1463142 request.go:629] Waited for 196.450605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:38.345074 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:40:38.345082 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:38.345090 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:38.345096 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:38.348691 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:38.348723 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:38.348734 1463142 round_trippers.go:580]     Audit-Id: fa0bef13-3d82-4197-95c1-1e3ca42f1154
	I1225 12:40:38.348748 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:38.348754 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:38.348760 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:38.348765 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:38.348774 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:38 GMT
	I1225 12:40:38.348883 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"519","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3134 chars]
	I1225 12:40:38.349178 1463142 pod_ready.go:92] pod "kube-proxy-7z5x6" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:38.349199 1463142 pod_ready.go:81] duration metric: took 365.912091ms waiting for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:38.349210 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:38.545334 1463142 request.go:629] Waited for 196.025606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:40:38.545441 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:40:38.545448 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:38.545459 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:38.545468 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:38.548739 1463142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:40:38.548763 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:38.548771 1463142 round_trippers.go:580]     Audit-Id: 5c57e769-2457-4f4d-b263-8471c4cba460
	I1225 12:40:38.548776 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:38.548785 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:38.548791 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:38.548799 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:38.548811 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:38 GMT
	I1225 12:40:38.548958 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"405","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:40:38.744763 1463142 request.go:629] Waited for 195.332884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:38.744832 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:38.744837 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:38.744845 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:38.744853 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:38.747674 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:38.747699 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:38.747711 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:38.747720 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:38 GMT
	I1225 12:40:38.747728 1463142 round_trippers.go:580]     Audit-Id: bc9c081a-aa6a-48be-8016-e4023854a8f4
	I1225 12:40:38.747735 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:38.747743 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:38.747752 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:38.747912 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:38.748451 1463142 pod_ready.go:92] pod "kube-proxy-k4jc7" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:38.748490 1463142 pod_ready.go:81] duration metric: took 399.261035ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:38.748509 1463142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:38.945382 1463142 request.go:629] Waited for 196.766529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:40:38.945458 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:40:38.945463 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:38.945472 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:38.945480 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:38.948478 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:38.948504 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:38.948516 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:38.948525 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:38.948532 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:38.948539 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:38.948547 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:38 GMT
	I1225 12:40:38.948555 1463142 round_trippers.go:580]     Audit-Id: 16675fb3-4a31-42d9-a002-2ea6b12f817a
	I1225 12:40:38.948689 1463142 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"299","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1225 12:40:39.145579 1463142 request.go:629] Waited for 196.433773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:39.145663 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:40:39.145687 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:39.145696 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:39.145703 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:39.148598 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:39.148636 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:39.148647 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:39.148656 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:39 GMT
	I1225 12:40:39.148665 1463142 round_trippers.go:580]     Audit-Id: b5bc4f84-62b3-4d3c-9639-31c8d41244d9
	I1225 12:40:39.148673 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:39.148682 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:39.148690 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:39.148869 1463142 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1225 12:40:39.149444 1463142 pod_ready.go:92] pod "kube-scheduler-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:40:39.149471 1463142 pod_ready.go:81] duration metric: took 400.952312ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:40:39.149486 1463142 pod_ready.go:38] duration metric: took 1.201358024s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:40:39.149508 1463142 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:40:39.149569 1463142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:40:39.163389 1463142 system_svc.go:56] duration metric: took 13.866463ms WaitForService to wait for kubelet.
	I1225 12:40:39.163428 1463142 kubeadm.go:581] duration metric: took 8.740570326s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:40:39.163466 1463142 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:40:39.344899 1463142 request.go:629] Waited for 181.343099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes
	I1225 12:40:39.344978 1463142 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:40:39.344987 1463142 round_trippers.go:469] Request Headers:
	I1225 12:40:39.344998 1463142 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:40:39.345011 1463142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:40:39.347866 1463142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:40:39.347888 1463142 round_trippers.go:577] Response Headers:
	I1225 12:40:39.347898 1463142 round_trippers.go:580]     Audit-Id: c11b2aac-4446-44cf-95f3-651997266726
	I1225 12:40:39.347906 1463142 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:40:39.347912 1463142 round_trippers.go:580]     Content-Type: application/json
	I1225 12:40:39.347918 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:40:39.347926 1463142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:40:39.347933 1463142 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:40:39 GMT
	I1225 12:40:39.348220 1463142 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"519"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"416","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10077 chars]
	I1225 12:40:39.348696 1463142 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:40:39.348719 1463142 node_conditions.go:123] node cpu capacity is 2
	I1225 12:40:39.348731 1463142 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:40:39.348735 1463142 node_conditions.go:123] node cpu capacity is 2
	I1225 12:40:39.348740 1463142 node_conditions.go:105] duration metric: took 185.268261ms to run NodePressure ...
	I1225 12:40:39.348752 1463142 start.go:228] waiting for startup goroutines ...
	I1225 12:40:39.348790 1463142 start.go:242] writing updated cluster config ...
	I1225 12:40:39.349083 1463142 ssh_runner.go:195] Run: rm -f paused
	I1225 12:40:39.404770 1463142 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 12:40:39.407383 1463142 out.go:177] * Done! kubectl is now configured to use "multinode-544936" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 12:38:59 UTC, ends at Mon 2023-12-25 12:40:46 UTC. --
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.102397046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508046102382478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cdd09874-8106-4fc0-9096-8bd8f82b4f47 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.102867990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e3d32ec5-0df5-4e6c-af0b-3d84eada1217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.102951985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e3d32ec5-0df5-4e6c-af0b-3d84eada1217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.103217402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc13b1c9fbcfbe231adb68ecfb413e13d78c964edd3dcde3573b1652a80c2333,PodSandboxId:a9548f90d415fb8b72fde40a87ce27d037fa75d2fff9fba56eae7e78d1106907,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508042412320204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fada49ae52efba23bef02ec0bced15c7de288e1192b70ebd0aa9b33c348c45ff,PodSandboxId:9ef39973350c964cdd3264248004ed201c87275c9fc8f568520d61aebe3c5191,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703507990895798289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218fdd289f72c4255c27ad9b9dd658722ab17fc773e2c475935612d9bb3601f8,PodSandboxId:8431a950b0e4baa68f019398099a161b66934305fef3143b0c6dde70f2b55ef0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507990786822297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f772e253692ff4dba76970dea30a34d0dbc655d446c39e305ed2aba3776941,PodSandboxId:03ca410ad7f7177db23153a5bffc28cc07845edbe31fadf3f8abbca35d6a68f6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703507987911076979,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4acb9ee46cadc0cb527999afbf105652f43876276822296843a0e176fa44a4,PodSandboxId:b3e3aa7b924550abe322139a28bc497a23f7f80dd011246656abc7e884dfb872,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703507986030464363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac678
22a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854ce6c702a5fd166cc78c7d45e349b11350323e46fc4a67c2291ed45ccdcbfb,PodSandboxId:60e29da4cd3bf153de7800a9e281c9a6145a1ae1dd80a3fe85df7a1425baa597,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703507964212503265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024afdd9c9c922b622cfbc34f209a244e67590857ca61ac2f8b1328c47853be6,PodSandboxId:9d5c2926ea8e4b666250c9e9863f5206bd36ab43faa013c3901bb4e32ef5c4c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703507964251226160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.h
ash: 7e47c687,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e,PodSandboxId:c312daf5b50488b8b7dbaa2430cc7c0590689d30ff6d2bfa072c84e0cf63e5ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703507964064710248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb
2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6625c84f1fe304d7204b7e6888d4aeff3146b2831f6abeb55ff3f0b8437b5c,PodSandboxId:3b91dd03e24a73b24386907f985c147d41154f4f36af18a2dcdd81d84661aa92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703507963883367317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e3d32ec5-0df5-4e6c-af0b-3d84eada1217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.143770208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ead63ce0-f661-4852-8fdd-18de585950c3 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.143850107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ead63ce0-f661-4852-8fdd-18de585950c3 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.145685375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b8937c24-c739-4af3-8e03-2418796cecfb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.146066762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508046146054059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b8937c24-c739-4af3-8e03-2418796cecfb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.146871818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=176e32d1-36ed-4806-b82c-1421b8422c0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.146982248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=176e32d1-36ed-4806-b82c-1421b8422c0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.147218425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc13b1c9fbcfbe231adb68ecfb413e13d78c964edd3dcde3573b1652a80c2333,PodSandboxId:a9548f90d415fb8b72fde40a87ce27d037fa75d2fff9fba56eae7e78d1106907,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508042412320204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fada49ae52efba23bef02ec0bced15c7de288e1192b70ebd0aa9b33c348c45ff,PodSandboxId:9ef39973350c964cdd3264248004ed201c87275c9fc8f568520d61aebe3c5191,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703507990895798289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218fdd289f72c4255c27ad9b9dd658722ab17fc773e2c475935612d9bb3601f8,PodSandboxId:8431a950b0e4baa68f019398099a161b66934305fef3143b0c6dde70f2b55ef0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507990786822297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f772e253692ff4dba76970dea30a34d0dbc655d446c39e305ed2aba3776941,PodSandboxId:03ca410ad7f7177db23153a5bffc28cc07845edbe31fadf3f8abbca35d6a68f6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703507987911076979,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4acb9ee46cadc0cb527999afbf105652f43876276822296843a0e176fa44a4,PodSandboxId:b3e3aa7b924550abe322139a28bc497a23f7f80dd011246656abc7e884dfb872,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703507986030464363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac678
22a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854ce6c702a5fd166cc78c7d45e349b11350323e46fc4a67c2291ed45ccdcbfb,PodSandboxId:60e29da4cd3bf153de7800a9e281c9a6145a1ae1dd80a3fe85df7a1425baa597,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703507964212503265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024afdd9c9c922b622cfbc34f209a244e67590857ca61ac2f8b1328c47853be6,PodSandboxId:9d5c2926ea8e4b666250c9e9863f5206bd36ab43faa013c3901bb4e32ef5c4c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703507964251226160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.h
ash: 7e47c687,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e,PodSandboxId:c312daf5b50488b8b7dbaa2430cc7c0590689d30ff6d2bfa072c84e0cf63e5ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703507964064710248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb
2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6625c84f1fe304d7204b7e6888d4aeff3146b2831f6abeb55ff3f0b8437b5c,PodSandboxId:3b91dd03e24a73b24386907f985c147d41154f4f36af18a2dcdd81d84661aa92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703507963883367317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=176e32d1-36ed-4806-b82c-1421b8422c0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.195518252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4cd2c8f0-c400-42fb-8b23-82e1261ca83c name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.195580359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4cd2c8f0-c400-42fb-8b23-82e1261ca83c name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.197620377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6baf98c7-573a-4585-940d-d6ba04ba8628 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.198069916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508046198054723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6baf98c7-573a-4585-940d-d6ba04ba8628 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.199199759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4811b6d-fd2d-499e-8196-6e53e05d2e0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.199328292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4811b6d-fd2d-499e-8196-6e53e05d2e0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.199527286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc13b1c9fbcfbe231adb68ecfb413e13d78c964edd3dcde3573b1652a80c2333,PodSandboxId:a9548f90d415fb8b72fde40a87ce27d037fa75d2fff9fba56eae7e78d1106907,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508042412320204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fada49ae52efba23bef02ec0bced15c7de288e1192b70ebd0aa9b33c348c45ff,PodSandboxId:9ef39973350c964cdd3264248004ed201c87275c9fc8f568520d61aebe3c5191,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703507990895798289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218fdd289f72c4255c27ad9b9dd658722ab17fc773e2c475935612d9bb3601f8,PodSandboxId:8431a950b0e4baa68f019398099a161b66934305fef3143b0c6dde70f2b55ef0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507990786822297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f772e253692ff4dba76970dea30a34d0dbc655d446c39e305ed2aba3776941,PodSandboxId:03ca410ad7f7177db23153a5bffc28cc07845edbe31fadf3f8abbca35d6a68f6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703507987911076979,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4acb9ee46cadc0cb527999afbf105652f43876276822296843a0e176fa44a4,PodSandboxId:b3e3aa7b924550abe322139a28bc497a23f7f80dd011246656abc7e884dfb872,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703507986030464363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac678
22a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854ce6c702a5fd166cc78c7d45e349b11350323e46fc4a67c2291ed45ccdcbfb,PodSandboxId:60e29da4cd3bf153de7800a9e281c9a6145a1ae1dd80a3fe85df7a1425baa597,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703507964212503265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024afdd9c9c922b622cfbc34f209a244e67590857ca61ac2f8b1328c47853be6,PodSandboxId:9d5c2926ea8e4b666250c9e9863f5206bd36ab43faa013c3901bb4e32ef5c4c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703507964251226160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.h
ash: 7e47c687,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e,PodSandboxId:c312daf5b50488b8b7dbaa2430cc7c0590689d30ff6d2bfa072c84e0cf63e5ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703507964064710248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb
2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6625c84f1fe304d7204b7e6888d4aeff3146b2831f6abeb55ff3f0b8437b5c,PodSandboxId:3b91dd03e24a73b24386907f985c147d41154f4f36af18a2dcdd81d84661aa92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703507963883367317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4811b6d-fd2d-499e-8196-6e53e05d2e0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.243639227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f201470-a2c1-4ce0-a10e-dc12da3264c9 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.243725694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f201470-a2c1-4ce0-a10e-dc12da3264c9 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.245368285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4132dfa8-be6b-4e25-a1b7-f2af10ac3e9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.245822794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508046245808692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4132dfa8-be6b-4e25-a1b7-f2af10ac3e9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.246529153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b8c5a7e-18c0-4fbf-b1e1-c97b1a32d8c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.246616976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b8c5a7e-18c0-4fbf-b1e1-c97b1a32d8c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:40:46 multinode-544936 crio[715]: time="2023-12-25 12:40:46.246805581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc13b1c9fbcfbe231adb68ecfb413e13d78c964edd3dcde3573b1652a80c2333,PodSandboxId:a9548f90d415fb8b72fde40a87ce27d037fa75d2fff9fba56eae7e78d1106907,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508042412320204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fada49ae52efba23bef02ec0bced15c7de288e1192b70ebd0aa9b33c348c45ff,PodSandboxId:9ef39973350c964cdd3264248004ed201c87275c9fc8f568520d61aebe3c5191,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703507990895798289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218fdd289f72c4255c27ad9b9dd658722ab17fc773e2c475935612d9bb3601f8,PodSandboxId:8431a950b0e4baa68f019398099a161b66934305fef3143b0c6dde70f2b55ef0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703507990786822297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f772e253692ff4dba76970dea30a34d0dbc655d446c39e305ed2aba3776941,PodSandboxId:03ca410ad7f7177db23153a5bffc28cc07845edbe31fadf3f8abbca35d6a68f6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703507987911076979,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4acb9ee46cadc0cb527999afbf105652f43876276822296843a0e176fa44a4,PodSandboxId:b3e3aa7b924550abe322139a28bc497a23f7f80dd011246656abc7e884dfb872,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703507986030464363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac678
22a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854ce6c702a5fd166cc78c7d45e349b11350323e46fc4a67c2291ed45ccdcbfb,PodSandboxId:60e29da4cd3bf153de7800a9e281c9a6145a1ae1dd80a3fe85df7a1425baa597,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703507964212503265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024afdd9c9c922b622cfbc34f209a244e67590857ca61ac2f8b1328c47853be6,PodSandboxId:9d5c2926ea8e4b666250c9e9863f5206bd36ab43faa013c3901bb4e32ef5c4c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703507964251226160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.h
ash: 7e47c687,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e,PodSandboxId:c312daf5b50488b8b7dbaa2430cc7c0590689d30ff6d2bfa072c84e0cf63e5ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703507964064710248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb
2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6625c84f1fe304d7204b7e6888d4aeff3146b2831f6abeb55ff3f0b8437b5c,PodSandboxId:3b91dd03e24a73b24386907f985c147d41154f4f36af18a2dcdd81d84661aa92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703507963883367317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b8c5a7e-18c0-4fbf-b1e1-c97b1a32d8c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fc13b1c9fbcfb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   a9548f90d415f       busybox-5bc68d56bd-qn48b
	fada49ae52efb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   9ef39973350c9       coredns-5dd5756b68-mg2zk
	218fdd289f72c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   8431a950b0e4b       storage-provisioner
	89f772e253692       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   03ca410ad7f71       kindnet-2hjhm
	2a4acb9ee46ca       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   b3e3aa7b92455       kube-proxy-k4jc7
	024afdd9c9c92       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   9d5c2926ea8e4       etcd-multinode-544936
	854ce6c702a5f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   60e29da4cd3bf       kube-scheduler-multinode-544936
	56252699573fb       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   c312daf5b5048       kube-apiserver-multinode-544936
	4b6625c84f1fe       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   3b91dd03e24a7       kube-controller-manager-multinode-544936
	
	
	==> coredns [fada49ae52efba23bef02ec0bced15c7de288e1192b70ebd0aa9b33c348c45ff] <==
	[INFO] 10.244.0.3:38821 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091077s
	[INFO] 10.244.1.2:38387 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159291s
	[INFO] 10.244.1.2:40197 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508453s
	[INFO] 10.244.1.2:37456 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231299s
	[INFO] 10.244.1.2:34139 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010216s
	[INFO] 10.244.1.2:57635 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001452223s
	[INFO] 10.244.1.2:60437 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181701s
	[INFO] 10.244.1.2:39878 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106291s
	[INFO] 10.244.1.2:60560 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161285s
	[INFO] 10.244.0.3:36753 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000290756s
	[INFO] 10.244.0.3:35138 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116164s
	[INFO] 10.244.0.3:52064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085433s
	[INFO] 10.244.0.3:59327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007639s
	[INFO] 10.244.1.2:49421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157344s
	[INFO] 10.244.1.2:38915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189686s
	[INFO] 10.244.1.2:40329 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109631s
	[INFO] 10.244.1.2:54265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119861s
	[INFO] 10.244.0.3:43841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113801s
	[INFO] 10.244.0.3:56108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132802s
	[INFO] 10.244.0.3:52319 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092036s
	[INFO] 10.244.0.3:54921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018018s
	[INFO] 10.244.1.2:33759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252007s
	[INFO] 10.244.1.2:56812 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097962s
	[INFO] 10.244.1.2:60304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191469s
	[INFO] 10.244.1.2:41333 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083175s
	
	
	==> describe nodes <==
	Name:               multinode-544936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-544936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=multinode-544936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T12_39_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-544936
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:40:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:39:49 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:39:49 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:39:49 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:39:49 +0000   Mon, 25 Dec 2023 12:39:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    multinode-544936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c871b9a919d4357b32244d5f639b350
	  System UUID:                2c871b9a-919d-4357-b322-44d5f639b350
	  Boot ID:                    a48f3fad-0f19-4b3d-a9d6-5855614c98e3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qn48b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-mg2zk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-544936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kindnet-2hjhm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      63s
	  kube-system                 kube-apiserver-multinode-544936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-multinode-544936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-k4jc7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-multinode-544936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 60s                kube-proxy       
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-544936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-544936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node multinode-544936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s                kubelet          Node multinode-544936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s                kubelet          Node multinode-544936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s                kubelet          Node multinode-544936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                node-controller  Node multinode-544936 event: Registered Node multinode-544936 in Controller
	  Normal  NodeReady                57s                kubelet          Node multinode-544936 status is now: NodeReady
	
	
	Name:               multinode-544936-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-544936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=multinode-544936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_25T12_40_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:40:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-544936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:40:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:40:37 +0000   Mon, 25 Dec 2023 12:40:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:40:37 +0000   Mon, 25 Dec 2023 12:40:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:40:37 +0000   Mon, 25 Dec 2023 12:40:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:40:37 +0000   Mon, 25 Dec 2023 12:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-544936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 66cce375dc5741e9bc94b73c36c44956
	  System UUID:                66cce375-dc57-41e9-bc94-b73c36c44956
	  Boot ID:                    73b7258c-7460-449c-b732-339b00feed1f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-z5f74    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-mjlfm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-7z5x6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-544936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-544936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-544936-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-544936-m02 event: Registered Node multinode-544936-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-544936-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 12:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.401699] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.291785] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140000] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec25 12:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.394104] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.111403] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.145445] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.102345] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.219574] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.600308] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +8.773227] systemd-fstab-generator[1256]: Ignoring "noauto" for root device
	[ +20.554490] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [024afdd9c9c922b622cfbc34f209a244e67590857ca61ac2f8b1328c47853be6] <==
	{"level":"info","ts":"2023-12-25T12:39:26.084627Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2023-12-25T12:39:26.084757Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2023-12-25T12:39:26.089755Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-25T12:39:26.089684Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3c2bdad7569acae7","initial-advertise-peer-urls":["https://192.168.39.21:2380"],"listen-peer-urls":["https://192.168.39.21:2380"],"advertise-client-urls":["https://192.168.39.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-25T12:39:26.531425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-25T12:39:26.531549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-25T12:39:26.531579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgPreVoteResp from 3c2bdad7569acae7 at term 1"}
	{"level":"info","ts":"2023-12-25T12:39:26.531593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became candidate at term 2"}
	{"level":"info","ts":"2023-12-25T12:39:26.531599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgVoteResp from 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2023-12-25T12:39:26.531608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became leader at term 2"}
	{"level":"info","ts":"2023-12-25T12:39:26.531615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3c2bdad7569acae7 elected leader 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2023-12-25T12:39:26.533334Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:39:26.53469Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3c2bdad7569acae7","local-member-attributes":"{Name:multinode-544936 ClientURLs:[https://192.168.39.21:2379]}","request-path":"/0/members/3c2bdad7569acae7/attributes","cluster-id":"f019a0e2d3e7d785","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-25T12:39:26.53474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T12:39:26.535648Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-25T12:39:26.53622Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T12:39:26.537229Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.21:2379"}
	{"level":"info","ts":"2023-12-25T12:39:26.539224Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:39:26.539323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:39:26.539345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:39:26.555179Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-25T12:39:26.555324Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-25T12:40:27.958987Z","caller":"traceutil/trace.go:171","msg":"trace[1069627044] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"115.284574ms","start":"2023-12-25T12:40:27.843673Z","end":"2023-12-25T12:40:27.958958Z","steps":["trace[1069627044] 'process raft request'  (duration: 109.28801ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T12:40:33.393035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.802515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2023-12-25T12:40:33.393187Z","caller":"traceutil/trace.go:171","msg":"trace[404819095] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:500; }","duration":"122.083979ms","start":"2023-12-25T12:40:33.271091Z","end":"2023-12-25T12:40:33.393175Z","steps":["trace[404819095] 'range keys from in-memory index tree'  (duration: 121.371253ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:40:46 up 1 min,  0 users,  load average: 0.50, 0.28, 0.11
	Linux multinode-544936 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [89f772e253692ff4dba76970dea30a34d0dbc655d446c39e305ed2aba3776941] <==
	I1225 12:39:48.758239       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1225 12:39:48.758408       1 main.go:107] hostIP = 192.168.39.21
	podIP = 192.168.39.21
	I1225 12:39:48.758732       1 main.go:116] setting mtu 1500 for CNI 
	I1225 12:39:48.758772       1 main.go:146] kindnetd IP family: "ipv4"
	I1225 12:39:48.758815       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1225 12:39:49.454408       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:39:49.454677       1 main.go:227] handling current node
	I1225 12:39:59.562811       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:39:59.562949       1 main.go:227] handling current node
	I1225 12:40:09.575023       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:40:09.575258       1 main.go:227] handling current node
	I1225 12:40:19.589732       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:40:19.589942       1 main.go:227] handling current node
	I1225 12:40:29.596395       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:40:29.596612       1 main.go:227] handling current node
	I1225 12:40:29.596643       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:40:29.596664       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	I1225 12:40:29.596962       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.205 Flags: [] Table: 0} 
	I1225 12:40:39.612866       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:40:39.612919       1 main.go:227] handling current node
	I1225 12:40:39.612942       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:40:39.612948       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e] <==
	I1225 12:39:28.087836       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 12:39:28.087842       1 cache.go:39] Caches are synced for autoregister controller
	E1225 12:39:28.116748       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1225 12:39:28.124338       1 shared_informer.go:318] Caches are synced for configmaps
	I1225 12:39:28.126404       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1225 12:39:28.126462       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1225 12:39:28.130798       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 12:39:28.131462       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1225 12:39:28.132637       1 controller.go:624] quota admission added evaluator for: namespaces
	I1225 12:39:28.321547       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 12:39:28.933297       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 12:39:28.938342       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 12:39:28.938400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 12:39:29.595854       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 12:39:29.655587       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 12:39:29.755781       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 12:39:29.765193       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.21]
	I1225 12:39:29.766079       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 12:39:29.779411       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 12:39:30.025450       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1225 12:39:31.106459       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1225 12:39:31.120794       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 12:39:31.136724       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1225 12:39:43.795503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1225 12:39:43.921952       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4b6625c84f1fe304d7204b7e6888d4aeff3146b2831f6abeb55ff3f0b8437b5c] <==
	I1225 12:39:44.505295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.086237ms"
	I1225 12:39:44.505413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.206µs"
	I1225 12:39:49.907772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="234.467µs"
	I1225 12:39:49.941097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.959µs"
	I1225 12:39:51.487629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.274625ms"
	I1225 12:39:51.490616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.392365ms"
	I1225 12:39:53.113923       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1225 12:40:29.323574       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-544936-m02\" does not exist"
	I1225 12:40:29.334194       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-544936-m02" podCIDRs=["10.244.1.0/24"]
	I1225 12:40:29.349383       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7z5x6"
	I1225 12:40:29.358508       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mjlfm"
	I1225 12:40:33.120966       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-544936-m02"
	I1225 12:40:33.121344       1 event.go:307] "Event occurred" object="multinode-544936-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-544936-m02 event: Registered Node multinode-544936-m02 in Controller"
	I1225 12:40:37.600922       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:40:40.156185       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1225 12:40:40.170617       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-z5f74"
	I1225 12:40:40.188436       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qn48b"
	I1225 12:40:40.208353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.016145ms"
	I1225 12:40:40.244417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.988015ms"
	I1225 12:40:40.261502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.018554ms"
	I1225 12:40:40.261608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.964µs"
	I1225 12:40:42.367469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.726113ms"
	I1225 12:40:42.368581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.906µs"
	I1225 12:40:42.643388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.366423ms"
	I1225 12:40:42.643686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.482µs"
	
	
	==> kube-proxy [2a4acb9ee46cadc0cb527999afbf105652f43876276822296843a0e176fa44a4] <==
	I1225 12:39:46.185342       1 server_others.go:69] "Using iptables proxy"
	I1225 12:39:46.203058       1 node.go:141] Successfully retrieved node IP: 192.168.39.21
	I1225 12:39:46.258218       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 12:39:46.258263       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 12:39:46.263164       1 server_others.go:152] "Using iptables Proxier"
	I1225 12:39:46.263262       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 12:39:46.263461       1 server.go:846] "Version info" version="v1.28.4"
	I1225 12:39:46.263509       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 12:39:46.264769       1 config.go:188] "Starting service config controller"
	I1225 12:39:46.264876       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 12:39:46.264925       1 config.go:97] "Starting endpoint slice config controller"
	I1225 12:39:46.264930       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 12:39:46.265495       1 config.go:315] "Starting node config controller"
	I1225 12:39:46.265533       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 12:39:46.365061       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 12:39:46.365213       1 shared_informer.go:318] Caches are synced for service config
	I1225 12:39:46.365600       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [854ce6c702a5fd166cc78c7d45e349b11350323e46fc4a67c2291ed45ccdcbfb] <==
	W1225 12:39:28.993704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1225 12:39:28.993759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1225 12:39:29.004210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1225 12:39:29.004259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1225 12:39:29.015345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 12:39:29.015427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 12:39:29.024168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 12:39:29.024250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1225 12:39:29.041933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 12:39:29.042022       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 12:39:29.063568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 12:39:29.063730       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1225 12:39:29.072073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 12:39:29.072229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1225 12:39:29.103368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 12:39:29.103421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1225 12:39:29.226762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1225 12:39:29.226921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1225 12:39:29.241296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1225 12:39:29.241321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1225 12:39:29.264563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 12:39:29.264736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 12:39:29.495932       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1225 12:39:29.496097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 12:39:31.563270       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 12:38:59 UTC, ends at Mon 2023-12-25 12:40:46 UTC. --
	Dec 25 12:39:43 multinode-544936 kubelet[1263]: I1225 12:39:43.927959    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14699a0d-601b-4bc3-9584-7ac67822a926-xtables-lock\") pod \"kube-proxy-k4jc7\" (UID: \"14699a0d-601b-4bc3-9584-7ac67822a926\") " pod="kube-system/kube-proxy-k4jc7"
	Dec 25 12:39:43 multinode-544936 kubelet[1263]: I1225 12:39:43.927979    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14699a0d-601b-4bc3-9584-7ac67822a926-lib-modules\") pod \"kube-proxy-k4jc7\" (UID: \"14699a0d-601b-4bc3-9584-7ac67822a926\") " pod="kube-system/kube-proxy-k4jc7"
	Dec 25 12:39:43 multinode-544936 kubelet[1263]: I1225 12:39:43.928002    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rl4v\" (UniqueName: \"kubernetes.io/projected/14699a0d-601b-4bc3-9584-7ac67822a926-kube-api-access-9rl4v\") pod \"kube-proxy-k4jc7\" (UID: \"14699a0d-601b-4bc3-9584-7ac67822a926\") " pod="kube-system/kube-proxy-k4jc7"
	Dec 25 12:39:43 multinode-544936 kubelet[1263]: I1225 12:39:43.928024    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8cfe7daa-3fc7-485a-8794-117466297c5a-cni-cfg\") pod \"kindnet-2hjhm\" (UID: \"8cfe7daa-3fc7-485a-8794-117466297c5a\") " pod="kube-system/kindnet-2hjhm"
	Dec 25 12:39:43 multinode-544936 kubelet[1263]: I1225 12:39:43.928045    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrn5m\" (UniqueName: \"kubernetes.io/projected/8cfe7daa-3fc7-485a-8794-117466297c5a-kube-api-access-xrn5m\") pod \"kindnet-2hjhm\" (UID: \"8cfe7daa-3fc7-485a-8794-117466297c5a\") " pod="kube-system/kindnet-2hjhm"
	Dec 25 12:39:45 multinode-544936 kubelet[1263]: E1225 12:39:45.028915    1263 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 25 12:39:45 multinode-544936 kubelet[1263]: E1225 12:39:45.029159    1263 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14699a0d-601b-4bc3-9584-7ac67822a926-kube-proxy podName:14699a0d-601b-4bc3-9584-7ac67822a926 nodeName:}" failed. No retries permitted until 2023-12-25 12:39:45.529008096 +0000 UTC m=+14.450920074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/14699a0d-601b-4bc3-9584-7ac67822a926-kube-proxy") pod "kube-proxy-k4jc7" (UID: "14699a0d-601b-4bc3-9584-7ac67822a926") : failed to sync configmap cache: timed out waiting for the condition
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.415585    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k4jc7" podStartSLOduration=6.41554586 podCreationTimestamp="2023-12-25 12:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-25 12:39:46.399255055 +0000 UTC m=+15.321167037" watchObservedRunningTime="2023-12-25 12:39:49.41554586 +0000 UTC m=+18.337457881"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.867966    1263 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.903868    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2hjhm" podStartSLOduration=6.903824556 podCreationTimestamp="2023-12-25 12:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-25 12:39:49.416687646 +0000 UTC m=+18.338599626" watchObservedRunningTime="2023-12-25 12:39:49.903824556 +0000 UTC m=+18.825736538"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.903996    1263 topology_manager.go:215] "Topology Admit Handler" podUID="4f4e21f4-8e73-4b81-a080-c42b6980ee3b" podNamespace="kube-system" podName="coredns-5dd5756b68-mg2zk"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.919187    1263 topology_manager.go:215] "Topology Admit Handler" podUID="897346ba-f39d-4771-913e-535bff9ca6b7" podNamespace="kube-system" podName="storage-provisioner"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.969296    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4e21f4-8e73-4b81-a080-c42b6980ee3b-config-volume\") pod \"coredns-5dd5756b68-mg2zk\" (UID: \"4f4e21f4-8e73-4b81-a080-c42b6980ee3b\") " pod="kube-system/coredns-5dd5756b68-mg2zk"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.969422    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvs7c\" (UniqueName: \"kubernetes.io/projected/897346ba-f39d-4771-913e-535bff9ca6b7-kube-api-access-mvs7c\") pod \"storage-provisioner\" (UID: \"897346ba-f39d-4771-913e-535bff9ca6b7\") " pod="kube-system/storage-provisioner"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.969581    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cl82\" (UniqueName: \"kubernetes.io/projected/4f4e21f4-8e73-4b81-a080-c42b6980ee3b-kube-api-access-4cl82\") pod \"coredns-5dd5756b68-mg2zk\" (UID: \"4f4e21f4-8e73-4b81-a080-c42b6980ee3b\") " pod="kube-system/coredns-5dd5756b68-mg2zk"
	Dec 25 12:39:49 multinode-544936 kubelet[1263]: I1225 12:39:49.969619    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/897346ba-f39d-4771-913e-535bff9ca6b7-tmp\") pod \"storage-provisioner\" (UID: \"897346ba-f39d-4771-913e-535bff9ca6b7\") " pod="kube-system/storage-provisioner"
	Dec 25 12:39:51 multinode-544936 kubelet[1263]: I1225 12:39:51.450796    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mg2zk" podStartSLOduration=7.450727472 podCreationTimestamp="2023-12-25 12:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-25 12:39:51.448864441 +0000 UTC m=+20.370776424" watchObservedRunningTime="2023-12-25 12:39:51.450727472 +0000 UTC m=+20.372639455"
	Dec 25 12:39:51 multinode-544936 kubelet[1263]: I1225 12:39:51.450900    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.4508838090000005 podCreationTimestamp="2023-12-25 12:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-25 12:39:51.432993235 +0000 UTC m=+20.354905218" watchObservedRunningTime="2023-12-25 12:39:51.450883809 +0000 UTC m=+20.372795792"
	Dec 25 12:40:31 multinode-544936 kubelet[1263]: E1225 12:40:31.295044    1263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 12:40:31 multinode-544936 kubelet[1263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 12:40:31 multinode-544936 kubelet[1263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 12:40:31 multinode-544936 kubelet[1263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 12:40:40 multinode-544936 kubelet[1263]: I1225 12:40:40.209557    1263 topology_manager.go:215] "Topology Admit Handler" podUID="91cf6ac2-2bc3-4049-aaed-7863759e58da" podNamespace="default" podName="busybox-5bc68d56bd-qn48b"
	Dec 25 12:40:40 multinode-544936 kubelet[1263]: I1225 12:40:40.408980    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmj5\" (UniqueName: \"kubernetes.io/projected/91cf6ac2-2bc3-4049-aaed-7863759e58da-kube-api-access-brmj5\") pod \"busybox-5bc68d56bd-qn48b\" (UID: \"91cf6ac2-2bc3-4049-aaed-7863759e58da\") " pod="default/busybox-5bc68d56bd-qn48b"
	Dec 25 12:40:42 multinode-544936 kubelet[1263]: I1225 12:40:42.631752    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-qn48b" podStartSLOduration=1.689378536 podCreationTimestamp="2023-12-25 12:40:40 +0000 UTC" firstStartedPulling="2023-12-25 12:40:41.449291763 +0000 UTC m=+70.371203725" lastFinishedPulling="2023-12-25 12:40:42.391618281 +0000 UTC m=+71.313530247" observedRunningTime="2023-12-25 12:40:42.630787752 +0000 UTC m=+71.552699739" watchObservedRunningTime="2023-12-25 12:40:42.631705058 +0000 UTC m=+71.553617038"
	

                                                
                                                
-- /stdout --
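The repeated kubelet "Could not set up iptables canary" errors in the log above stem from the ip6tables nat table being unavailable in the guest ("do you need to insmod?"). As a hedged aside (not part of the test), one way to check this from inside the node is via minikube ssh; whether ip6table_nat is even built for the minikube guest kernel is an assumption here, and the message is commonly benign when IPv6 is unused:

	# Sketch only: check whether the ip6table_nat module is loaded in the guest.
	out/minikube-linux-amd64 -p multinode-544936 ssh "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
	# If the module exists for this kernel, loading it should make the nat table listable:
	out/minikube-linux-amd64 -p multinode-544936 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"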
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-544936 -n multinode-544936
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-544936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.33s)
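For reference, a rough manual approximation of the pod-to-host connectivity this test exercises (a sketch, not the test's exact commands): the pod name is taken from the kubelet log above, host.minikube.internal is the host alias minikube publishes, and 192.168.39.1 is assumed to be the host-side gateway of the libvirt network seen in the logs (node IP 192.168.39.21 on virbr1).

	# Illustrative only: resolve the host alias and ping the presumed host gateway from the busybox pod.
	kubectl --context multinode-544936 exec busybox-5bc68d56bd-qn48b -- nslookup host.minikube.internal
	kubectl --context multinode-544936 exec busybox-5bc68d56bd-qn48b -- ping -c 1 192.168.39.1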

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (687.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-544936
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-544936
E1225 12:43:56.707019 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:44:07.348121 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-544936: exit status 82 (2m1.539948361s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-544936"  ...
	* Stopping node "multinode-544936"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-544936" : exit status 82
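The stop failed with GUEST_STOP_TIMEOUT while the VM was still reported as "Running". As a hedged aside (not something the test does), the libvirt domain can be inspected and forced down directly with virsh; the domain name and the qemu:///system URI are taken from the start log below:

	# Sketch: inspect and force-stop the libvirt domain behind the stuck `minikube stop`.
	virsh --connect qemu:///system domstate multinode-544936    # expected to report "running"
	virsh --connect qemu:///system shutdown multinode-544936    # graceful ACPI shutdown request
	virsh --connect qemu:///system destroy multinode-544936     # hard power-off if shutdown never completes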
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-544936 --wait=true -v=8 --alsologtostderr
E1225 12:45:30.397120 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:46:26.363524 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:48:56.706595 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:49:07.348290 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:50:19.756439 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:51:26.363313 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:52:49.409878 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-544936 --wait=true -v=8 --alsologtostderr: (9m23.104161614s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-544936
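After the restart completed (9m23.1s), a quick way to confirm all three nodes rejoined, beyond the before/after `node list` comparison the test itself performs at multinode_test.go:311 and :328 (a sketch, not part of the test):

	out/minikube-linux-amd64 -p multinode-544936 node list
	kubectl --context multinode-544936 get nodes -o wide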
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-544936 -n multinode-544936
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-544936 logs -n 25: (1.655687074s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2589582466/001/cp-test_multinode-544936-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936:/home/docker/cp-test_multinode-544936-m02_multinode-544936.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n multinode-544936 sudo cat                                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /home/docker/cp-test_multinode-544936-m02_multinode-544936.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03:/home/docker/cp-test_multinode-544936-m02_multinode-544936-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n multinode-544936-m03 sudo cat                                   | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /home/docker/cp-test_multinode-544936-m02_multinode-544936-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp testdata/cp-test.txt                                                | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2589582466/001/cp-test_multinode-544936-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936:/home/docker/cp-test_multinode-544936-m03_multinode-544936.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n multinode-544936 sudo cat                                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /home/docker/cp-test_multinode-544936-m03_multinode-544936.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt                       | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m02:/home/docker/cp-test_multinode-544936-m03_multinode-544936-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n                                                                 | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | multinode-544936-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-544936 ssh -n multinode-544936-m02 sudo cat                                   | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	|         | /home/docker/cp-test_multinode-544936-m03_multinode-544936-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-544936 node stop m03                                                          | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:41 UTC |
	| node    | multinode-544936 node start                                                             | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:41 UTC | 25 Dec 23 12:42 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-544936                                                                | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:42 UTC |                     |
	| stop    | -p multinode-544936                                                                     | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:42 UTC |                     |
	| start   | -p multinode-544936                                                                     | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:44 UTC | 25 Dec 23 12:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-544936                                                                | multinode-544936 | jenkins | v1.32.0 | 25 Dec 23 12:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:44:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:44:14.078710 1466525 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:44:14.078850 1466525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:44:14.078859 1466525 out.go:309] Setting ErrFile to fd 2...
	I1225 12:44:14.078864 1466525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:44:14.079080 1466525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:44:14.079693 1466525 out.go:303] Setting JSON to false
	I1225 12:44:14.080654 1466525 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":156407,"bootTime":1703351847,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:44:14.080723 1466525 start.go:138] virtualization: kvm guest
	I1225 12:44:14.083489 1466525 out.go:177] * [multinode-544936] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:44:14.085512 1466525 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:44:14.085552 1466525 notify.go:220] Checking for updates...
	I1225 12:44:14.087203 1466525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:44:14.089096 1466525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:44:14.090626 1466525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:44:14.092108 1466525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:44:14.093703 1466525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:44:14.095768 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:44:14.095932 1466525 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:44:14.096568 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:44:14.096645 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:44:14.112207 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I1225 12:44:14.112649 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:44:14.113235 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:44:14.113262 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:44:14.113569 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:44:14.113722 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:44:14.152744 1466525 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 12:44:14.154205 1466525 start.go:298] selected driver: kvm2
	I1225 12:44:14.154229 1466525 start.go:902] validating driver "kvm2" against &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:44:14.154531 1466525 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:44:14.154918 1466525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:44:14.155008 1466525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 12:44:14.170773 1466525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 12:44:14.171534 1466525 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 12:44:14.171608 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:44:14.171624 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:44:14.171634 1466525 start_flags.go:323] config:
	{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:44:14.171851 1466525 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:44:14.173986 1466525 out.go:177] * Starting control plane node multinode-544936 in cluster multinode-544936
	I1225 12:44:14.175470 1466525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:44:14.175516 1466525 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 12:44:14.175531 1466525 cache.go:56] Caching tarball of preloaded images
	I1225 12:44:14.175617 1466525 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:44:14.175629 1466525 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:44:14.175766 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:44:14.175968 1466525 start.go:365] acquiring machines lock for multinode-544936: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:44:14.176013 1466525 start.go:369] acquired machines lock for "multinode-544936" in 24.734µs
	I1225 12:44:14.176029 1466525 start.go:96] Skipping create...Using existing machine configuration
	I1225 12:44:14.176036 1466525 fix.go:54] fixHost starting: 
	I1225 12:44:14.176282 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:44:14.176325 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:44:14.191324 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I1225 12:44:14.191776 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:44:14.192273 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:44:14.192299 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:44:14.192629 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:44:14.192847 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:44:14.193011 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:44:14.194776 1466525 fix.go:102] recreateIfNeeded on multinode-544936: state=Running err=<nil>
	W1225 12:44:14.194798 1466525 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 12:44:14.196874 1466525 out.go:177] * Updating the running kvm2 "multinode-544936" VM ...
	I1225 12:44:14.198112 1466525 machine.go:88] provisioning docker machine ...
	I1225 12:44:14.198137 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:44:14.198425 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:44:14.198626 1466525 buildroot.go:166] provisioning hostname "multinode-544936"
	I1225 12:44:14.198651 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:44:14.198817 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:44:14.201287 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:44:14.201807 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:44:14.201835 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:44:14.201947 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:44:14.202246 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:44:14.202416 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:44:14.202593 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:44:14.202779 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:44:14.203128 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:44:14.203147 1466525 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936 && echo "multinode-544936" | sudo tee /etc/hostname
	I1225 12:44:32.558859 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:44:38.638794 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:44:41.710821 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:44:47.790813 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:44:50.862855 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:44:56.942764 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:00.014759 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:06.094792 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:09.166722 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:15.246754 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:18.318741 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:24.398815 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:27.470783 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:33.550813 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:36.622800 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:42.702773 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:45.774748 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:51.854803 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:45:54.926837 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:01.006783 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:04.078739 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:10.158777 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:13.230728 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:19.310832 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:22.382830 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:28.462772 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:31.534855 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:37.614746 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:40.686817 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:46.766703 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:49.838755 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:55.918757 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:46:58.990693 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:05.070831 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:08.142777 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:14.222824 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:17.294865 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:23.374824 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:26.446800 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:32.526758 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:35.598740 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:41.678721 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:44.750712 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:50.830788 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:53.902819 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:47:59.982728 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:03.054697 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:09.134748 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:12.206747 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:18.286876 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:21.358793 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:27.438787 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:30.510732 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:36.590839 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:39.662838 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:45.742766 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:48.814782 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:54.894748 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:48:57.966833 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:49:04.046659 1466525 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.21:22: connect: no route to host
	I1225 12:49:07.049022 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:49:07.049068 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:07.051292 1466525 machine.go:91] provisioned docker machine in 4m52.853148587s
	I1225 12:49:07.051358 1466525 fix.go:56] fixHost completed within 4m52.875321841s
	I1225 12:49:07.051371 1466525 start.go:83] releasing machines lock for "multinode-544936", held for 4m52.875348016s
	W1225 12:49:07.051399 1466525 start.go:694] error starting host: provision: host is not running
	W1225 12:49:07.051566 1466525 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 12:49:07.051584 1466525 start.go:709] Will try again in 5 seconds ...
	I1225 12:49:12.053701 1466525 start.go:365] acquiring machines lock for multinode-544936: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:49:12.053853 1466525 start.go:369] acquired machines lock for "multinode-544936" in 87.406µs
	I1225 12:49:12.053912 1466525 start.go:96] Skipping create...Using existing machine configuration
	I1225 12:49:12.053924 1466525 fix.go:54] fixHost starting: 
	I1225 12:49:12.054358 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:49:12.054395 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:49:12.070129 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41509
	I1225 12:49:12.070635 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:49:12.071153 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:49:12.071176 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:49:12.071620 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:49:12.071847 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:12.072010 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:49:12.073866 1466525 fix.go:102] recreateIfNeeded on multinode-544936: state=Stopped err=<nil>
	I1225 12:49:12.073894 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	W1225 12:49:12.074070 1466525 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 12:49:12.076712 1466525 out.go:177] * Restarting existing kvm2 VM for "multinode-544936" ...
	I1225 12:49:12.078389 1466525 main.go:141] libmachine: (multinode-544936) Calling .Start
	I1225 12:49:12.078591 1466525 main.go:141] libmachine: (multinode-544936) Ensuring networks are active...
	I1225 12:49:12.079433 1466525 main.go:141] libmachine: (multinode-544936) Ensuring network default is active
	I1225 12:49:12.079918 1466525 main.go:141] libmachine: (multinode-544936) Ensuring network mk-multinode-544936 is active
	I1225 12:49:12.080326 1466525 main.go:141] libmachine: (multinode-544936) Getting domain xml...
	I1225 12:49:12.081179 1466525 main.go:141] libmachine: (multinode-544936) Creating domain...
	I1225 12:49:13.352577 1466525 main.go:141] libmachine: (multinode-544936) Waiting to get IP...
	I1225 12:49:13.353510 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:13.354399 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:13.354505 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:13.354332 1467340 retry.go:31] will retry after 243.04727ms: waiting for machine to come up
	I1225 12:49:13.599269 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:13.599823 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:13.599850 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:13.599741 1467340 retry.go:31] will retry after 268.016968ms: waiting for machine to come up
	I1225 12:49:13.869360 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:13.869881 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:13.869923 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:13.869821 1467340 retry.go:31] will retry after 434.134029ms: waiting for machine to come up
	I1225 12:49:14.305549 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:14.306008 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:14.306049 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:14.305967 1467340 retry.go:31] will retry after 509.397916ms: waiting for machine to come up
	I1225 12:49:14.816663 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:14.817232 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:14.817267 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:14.817179 1467340 retry.go:31] will retry after 651.311094ms: waiting for machine to come up
	I1225 12:49:15.470330 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:15.470875 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:15.470895 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:15.470825 1467340 retry.go:31] will retry after 657.645302ms: waiting for machine to come up
	I1225 12:49:16.129655 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:16.130141 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:16.130175 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:16.130087 1467340 retry.go:31] will retry after 791.200973ms: waiting for machine to come up
	I1225 12:49:16.923162 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:16.923739 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:16.923763 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:16.923674 1467340 retry.go:31] will retry after 1.35939635s: waiting for machine to come up
	I1225 12:49:18.284995 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:18.285498 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:18.285525 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:18.285408 1467340 retry.go:31] will retry after 1.845622361s: waiting for machine to come up
	I1225 12:49:20.133500 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:20.133972 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:20.133996 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:20.133925 1467340 retry.go:31] will retry after 2.014848531s: waiting for machine to come up
	I1225 12:49:22.150520 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:22.151039 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:22.151068 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:22.151003 1467340 retry.go:31] will retry after 2.027274711s: waiting for machine to come up
	I1225 12:49:24.180146 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:24.180653 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:24.180685 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:24.180596 1467340 retry.go:31] will retry after 3.624500062s: waiting for machine to come up
	I1225 12:49:27.806250 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:27.806723 1466525 main.go:141] libmachine: (multinode-544936) DBG | unable to find current IP address of domain multinode-544936 in network mk-multinode-544936
	I1225 12:49:27.806748 1466525 main.go:141] libmachine: (multinode-544936) DBG | I1225 12:49:27.806661 1467340 retry.go:31] will retry after 3.541898829s: waiting for machine to come up
	I1225 12:49:31.352588 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.352983 1466525 main.go:141] libmachine: (multinode-544936) Found IP for machine: 192.168.39.21
	I1225 12:49:31.353007 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has current primary IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.353014 1466525 main.go:141] libmachine: (multinode-544936) Reserving static IP address...
	I1225 12:49:31.353575 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "multinode-544936", mac: "52:54:00:c0:ee:9c", ip: "192.168.39.21"} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.353596 1466525 main.go:141] libmachine: (multinode-544936) DBG | skip adding static IP to network mk-multinode-544936 - found existing host DHCP lease matching {name: "multinode-544936", mac: "52:54:00:c0:ee:9c", ip: "192.168.39.21"}
	I1225 12:49:31.353612 1466525 main.go:141] libmachine: (multinode-544936) DBG | Getting to WaitForSSH function...
	I1225 12:49:31.353618 1466525 main.go:141] libmachine: (multinode-544936) Reserved static IP address: 192.168.39.21
	I1225 12:49:31.353628 1466525 main.go:141] libmachine: (multinode-544936) Waiting for SSH to be available...
	I1225 12:49:31.355823 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.356345 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.356369 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.356509 1466525 main.go:141] libmachine: (multinode-544936) DBG | Using SSH client type: external
	I1225 12:49:31.356549 1466525 main.go:141] libmachine: (multinode-544936) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa (-rw-------)
	I1225 12:49:31.356572 1466525 main.go:141] libmachine: (multinode-544936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 12:49:31.356588 1466525 main.go:141] libmachine: (multinode-544936) DBG | About to run SSH command:
	I1225 12:49:31.356597 1466525 main.go:141] libmachine: (multinode-544936) DBG | exit 0
	I1225 12:49:31.454289 1466525 main.go:141] libmachine: (multinode-544936) DBG | SSH cmd err, output: <nil>: 
	I1225 12:49:31.454736 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetConfigRaw
	I1225 12:49:31.455542 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:49:31.458261 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.458734 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.458770 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.459057 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:49:31.459268 1466525 machine.go:88] provisioning docker machine ...
	I1225 12:49:31.459288 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:31.459513 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:49:31.459681 1466525 buildroot.go:166] provisioning hostname "multinode-544936"
	I1225 12:49:31.459696 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:49:31.459836 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:31.462452 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.462830 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.462861 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.463000 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:31.463195 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:31.463349 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:31.463483 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:31.463690 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:49:31.464044 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:49:31.464059 1466525 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936 && echo "multinode-544936" | sudo tee /etc/hostname
	I1225 12:49:31.607557 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-544936
	
	I1225 12:49:31.607593 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:31.610314 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.610732 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.610770 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.610929 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:31.611124 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:31.611294 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:31.611410 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:31.611616 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:49:31.612079 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:49:31.612106 1466525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-544936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-544936/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-544936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:49:31.750890 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:49:31.750921 1466525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:49:31.750958 1466525 buildroot.go:174] setting up certificates
	I1225 12:49:31.750970 1466525 provision.go:83] configureAuth start
	I1225 12:49:31.750979 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetMachineName
	I1225 12:49:31.751307 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:49:31.754139 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.754519 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.754548 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.754718 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:31.757142 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.757575 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.757608 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.757717 1466525 provision.go:138] copyHostCerts
	I1225 12:49:31.757763 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:49:31.757822 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:49:31.757842 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:49:31.757933 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:49:31.758050 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:49:31.758085 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:49:31.758092 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:49:31.758136 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:49:31.758215 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:49:31.758241 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:49:31.758250 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:49:31.758287 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:49:31.758356 1466525 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.multinode-544936 san=[192.168.39.21 192.168.39.21 localhost 127.0.0.1 minikube multinode-544936]
	I1225 12:49:31.939020 1466525 provision.go:172] copyRemoteCerts
	I1225 12:49:31.939121 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:49:31.939174 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:31.941978 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.942338 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:31.942364 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:31.942591 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:31.942802 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:31.942944 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:31.943076 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:49:32.044865 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:49:32.044975 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:49:32.069307 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:49:32.069405 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1225 12:49:32.092877 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:49:32.092968 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 12:49:32.114981 1466525 provision.go:86] duration metric: configureAuth took 363.997322ms
	I1225 12:49:32.115007 1466525 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:49:32.115235 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:49:32.115323 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:32.118508 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.118933 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.118964 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.119262 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:32.119484 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.119686 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.119836 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:32.120007 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:49:32.120320 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:49:32.120336 1466525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:49:32.448951 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:49:32.448985 1466525 machine.go:91] provisioned docker machine in 989.701855ms
	I1225 12:49:32.449000 1466525 start.go:300] post-start starting for "multinode-544936" (driver="kvm2")
	I1225 12:49:32.449012 1466525 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:49:32.449055 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:32.449409 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:49:32.449447 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:32.452553 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.452963 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.453004 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.453115 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:32.453341 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.453550 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:32.453699 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:49:32.548217 1466525 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:49:32.552356 1466525 command_runner.go:130] > NAME=Buildroot
	I1225 12:49:32.552377 1466525 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1225 12:49:32.552382 1466525 command_runner.go:130] > ID=buildroot
	I1225 12:49:32.552387 1466525 command_runner.go:130] > VERSION_ID=2021.02.12
	I1225 12:49:32.552392 1466525 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1225 12:49:32.552550 1466525 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:49:32.552575 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:49:32.552655 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:49:32.552771 1466525 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:49:32.552793 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:49:32.552917 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:49:32.561421 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:49:32.585194 1466525 start.go:303] post-start completed in 136.175113ms
	I1225 12:49:32.585231 1466525 fix.go:56] fixHost completed within 20.531307091s
	I1225 12:49:32.585260 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:32.588217 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.588573 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.588622 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.588739 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:32.588982 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.589157 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.589310 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:32.589477 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:49:32.589819 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I1225 12:49:32.589835 1466525 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:49:32.723914 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703508572.669911255
	
	I1225 12:49:32.723950 1466525 fix.go:206] guest clock: 1703508572.669911255
	I1225 12:49:32.723962 1466525 fix.go:219] Guest: 2023-12-25 12:49:32.669911255 +0000 UTC Remote: 2023-12-25 12:49:32.585236103 +0000 UTC m=+318.562414636 (delta=84.675152ms)
	I1225 12:49:32.724020 1466525 fix.go:190] guest clock delta is within tolerance: 84.675152ms
	I1225 12:49:32.724027 1466525 start.go:83] releasing machines lock for "multinode-544936", held for 20.670159079s
	I1225 12:49:32.724065 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:32.724373 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:49:32.727194 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.727645 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.727678 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.727820 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:32.728439 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:32.728700 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:49:32.728808 1466525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:49:32.728860 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:32.728937 1466525 ssh_runner.go:195] Run: cat /version.json
	I1225 12:49:32.728988 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:49:32.731487 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.731850 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.731880 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.731911 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.732150 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:32.732339 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.732426 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:32.732454 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:32.732489 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:32.732649 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:49:32.732661 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:49:32.732803 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:49:32.732934 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:49:32.733098 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:49:32.823282 1466525 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I1225 12:49:32.823452 1466525 ssh_runner.go:195] Run: systemctl --version
	I1225 12:49:32.853560 1466525 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1225 12:49:32.853624 1466525 command_runner.go:130] > systemd 247 (247)
	I1225 12:49:32.853651 1466525 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1225 12:49:32.853733 1466525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:49:32.998119 1466525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1225 12:49:33.004485 1466525 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1225 12:49:33.004551 1466525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:49:33.004624 1466525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:49:33.020584 1466525 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1225 12:49:33.020666 1466525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 12:49:33.020679 1466525 start.go:475] detecting cgroup driver to use...
	I1225 12:49:33.020750 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:49:33.034137 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:49:33.046628 1466525 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:49:33.046698 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:49:33.059529 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:49:33.072686 1466525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:49:33.173250 1466525 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1225 12:49:33.173442 1466525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:49:33.187916 1466525 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1225 12:49:33.286929 1466525 docker.go:219] disabling docker service ...
	I1225 12:49:33.287033 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:49:33.300724 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:49:33.312934 1466525 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1225 12:49:33.313322 1466525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:49:33.327838 1466525 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1225 12:49:33.420141 1466525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:49:33.433916 1466525 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1225 12:49:33.434323 1466525 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1225 12:49:33.521197 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:49:33.534820 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:49:33.552567 1466525 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1225 12:49:33.552617 1466525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:49:33.552685 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:49:33.564604 1466525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:49:33.564673 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:49:33.576587 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:49:33.589001 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:49:33.601740 1466525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:49:33.613929 1466525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:49:33.623529 1466525 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:49:33.623574 1466525 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 12:49:33.623626 1466525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 12:49:33.638383 1466525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:49:33.647971 1466525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:49:33.747576 1466525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 12:49:33.923009 1466525 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:49:33.923086 1466525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:49:33.927638 1466525 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1225 12:49:33.927664 1466525 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1225 12:49:33.927671 1466525 command_runner.go:130] > Device: 16h/22d	Inode: 801         Links: 1
	I1225 12:49:33.927677 1466525 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:49:33.927683 1466525 command_runner.go:130] > Access: 2023-12-25 12:49:33.853350221 +0000
	I1225 12:49:33.927693 1466525 command_runner.go:130] > Modify: 2023-12-25 12:49:33.853350221 +0000
	I1225 12:49:33.927701 1466525 command_runner.go:130] > Change: 2023-12-25 12:49:33.853350221 +0000
	I1225 12:49:33.927707 1466525 command_runner.go:130] >  Birth: -
	I1225 12:49:33.928012 1466525 start.go:543] Will wait 60s for crictl version
	I1225 12:49:33.928098 1466525 ssh_runner.go:195] Run: which crictl
	I1225 12:49:33.931553 1466525 command_runner.go:130] > /usr/bin/crictl
	I1225 12:49:33.931773 1466525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:49:33.973675 1466525 command_runner.go:130] > Version:  0.1.0
	I1225 12:49:33.973704 1466525 command_runner.go:130] > RuntimeName:  cri-o
	I1225 12:49:33.973711 1466525 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1225 12:49:33.973719 1466525 command_runner.go:130] > RuntimeApiVersion:  v1
	I1225 12:49:33.973751 1466525 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:49:33.973839 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:49:34.018576 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:49:34.018598 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:49:34.018608 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:49:34.018616 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:49:34.018622 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:49:34.018627 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:49:34.018632 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:49:34.018639 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:49:34.018652 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:49:34.018667 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:49:34.018673 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:49:34.018682 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:49:34.018773 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:49:34.070071 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:49:34.070100 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:49:34.070116 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:49:34.070121 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:49:34.070127 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:49:34.070132 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:49:34.070136 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:49:34.070144 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:49:34.070153 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:49:34.070165 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:49:34.070176 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:49:34.070182 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:49:34.072133 1466525 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:49:34.073401 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:49:34.076582 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:34.077007 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:49:34.077040 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:49:34.077283 1466525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:49:34.081515 1466525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 12:49:34.093196 1466525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:49:34.093249 1466525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:49:34.132169 1466525 command_runner.go:130] > {
	I1225 12:49:34.132197 1466525 command_runner.go:130] >   "images": [
	I1225 12:49:34.132216 1466525 command_runner.go:130] >     {
	I1225 12:49:34.132229 1466525 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1225 12:49:34.132237 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:34.132250 1466525 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1225 12:49:34.132257 1466525 command_runner.go:130] >       ],
	I1225 12:49:34.132265 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:34.132280 1466525 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1225 12:49:34.132295 1466525 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1225 12:49:34.132301 1466525 command_runner.go:130] >       ],
	I1225 12:49:34.132306 1466525 command_runner.go:130] >       "size": "750414",
	I1225 12:49:34.132314 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:34.132324 1466525 command_runner.go:130] >         "value": "65535"
	I1225 12:49:34.132333 1466525 command_runner.go:130] >       },
	I1225 12:49:34.132341 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:34.132353 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:34.132360 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:34.132369 1466525 command_runner.go:130] >     }
	I1225 12:49:34.132375 1466525 command_runner.go:130] >   ]
	I1225 12:49:34.132387 1466525 command_runner.go:130] > }
	I1225 12:49:34.132572 1466525 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 12:49:34.132645 1466525 ssh_runner.go:195] Run: which lz4
	I1225 12:49:34.136242 1466525 command_runner.go:130] > /usr/bin/lz4
	I1225 12:49:34.136395 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1225 12:49:34.136502 1466525 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 12:49:34.140508 1466525 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:49:34.140546 1466525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 12:49:34.140573 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 12:49:36.010576 1466525 crio.go:444] Took 1.874103 seconds to copy over tarball
	I1225 12:49:36.010652 1466525 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 12:49:39.361764 1466525 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.351079854s)
	I1225 12:49:39.477818 1466525 crio.go:451] Took 3.467206 seconds to extract the tarball
	I1225 12:49:39.477836 1466525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 12:49:39.520039 1466525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 12:49:39.625811 1466525 command_runner.go:130] > {
	I1225 12:49:39.625832 1466525 command_runner.go:130] >   "images": [
	I1225 12:49:39.625837 1466525 command_runner.go:130] >     {
	I1225 12:49:39.625844 1466525 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1225 12:49:39.625849 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.625855 1466525 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1225 12:49:39.625859 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.625863 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.625872 1466525 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1225 12:49:39.625879 1466525 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1225 12:49:39.625883 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.625890 1466525 command_runner.go:130] >       "size": "65258016",
	I1225 12:49:39.625895 1466525 command_runner.go:130] >       "uid": null,
	I1225 12:49:39.625901 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.625907 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.625917 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.625921 1466525 command_runner.go:130] >     },
	I1225 12:49:39.625924 1466525 command_runner.go:130] >     {
	I1225 12:49:39.625930 1466525 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1225 12:49:39.625937 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.625942 1466525 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1225 12:49:39.625948 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.625952 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.625962 1466525 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1225 12:49:39.625970 1466525 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1225 12:49:39.625977 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.625991 1466525 command_runner.go:130] >       "size": "31470524",
	I1225 12:49:39.626001 1466525 command_runner.go:130] >       "uid": null,
	I1225 12:49:39.626010 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626032 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626042 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626050 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626054 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626060 1466525 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1225 12:49:39.626067 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626072 1466525 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1225 12:49:39.626078 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626083 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626092 1466525 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1225 12:49:39.626102 1466525 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1225 12:49:39.626108 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626113 1466525 command_runner.go:130] >       "size": "53621675",
	I1225 12:49:39.626119 1466525 command_runner.go:130] >       "uid": null,
	I1225 12:49:39.626124 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626130 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626135 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626141 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626147 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626156 1466525 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1225 12:49:39.626162 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626170 1466525 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1225 12:49:39.626176 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626180 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626189 1466525 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1225 12:49:39.626199 1466525 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1225 12:49:39.626211 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626218 1466525 command_runner.go:130] >       "size": "295456551",
	I1225 12:49:39.626224 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:39.626229 1466525 command_runner.go:130] >         "value": "0"
	I1225 12:49:39.626235 1466525 command_runner.go:130] >       },
	I1225 12:49:39.626239 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626250 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626257 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626260 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626264 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626277 1466525 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1225 12:49:39.626284 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626294 1466525 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1225 12:49:39.626301 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626305 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626318 1466525 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1225 12:49:39.626327 1466525 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1225 12:49:39.626334 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626339 1466525 command_runner.go:130] >       "size": "127226832",
	I1225 12:49:39.626345 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:39.626350 1466525 command_runner.go:130] >         "value": "0"
	I1225 12:49:39.626356 1466525 command_runner.go:130] >       },
	I1225 12:49:39.626360 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626364 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626370 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626374 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626380 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626386 1466525 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1225 12:49:39.626395 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626402 1466525 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1225 12:49:39.626407 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626411 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626421 1466525 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1225 12:49:39.626431 1466525 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1225 12:49:39.626455 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626462 1466525 command_runner.go:130] >       "size": "123261750",
	I1225 12:49:39.626472 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:39.626482 1466525 command_runner.go:130] >         "value": "0"
	I1225 12:49:39.626491 1466525 command_runner.go:130] >       },
	I1225 12:49:39.626505 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626515 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626524 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626533 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626542 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626551 1466525 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1225 12:49:39.626561 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626576 1466525 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1225 12:49:39.626583 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626587 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626597 1466525 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1225 12:49:39.626606 1466525 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1225 12:49:39.626612 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626617 1466525 command_runner.go:130] >       "size": "74749335",
	I1225 12:49:39.626623 1466525 command_runner.go:130] >       "uid": null,
	I1225 12:49:39.626627 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626633 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626637 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626642 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626646 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626655 1466525 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1225 12:49:39.626662 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626667 1466525 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1225 12:49:39.626673 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626677 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626702 1466525 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1225 12:49:39.626720 1466525 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1225 12:49:39.626726 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626731 1466525 command_runner.go:130] >       "size": "61551410",
	I1225 12:49:39.626737 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:39.626741 1466525 command_runner.go:130] >         "value": "0"
	I1225 12:49:39.626747 1466525 command_runner.go:130] >       },
	I1225 12:49:39.626752 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626758 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626762 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626768 1466525 command_runner.go:130] >     },
	I1225 12:49:39.626772 1466525 command_runner.go:130] >     {
	I1225 12:49:39.626780 1466525 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1225 12:49:39.626786 1466525 command_runner.go:130] >       "repoTags": [
	I1225 12:49:39.626795 1466525 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1225 12:49:39.626801 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626805 1466525 command_runner.go:130] >       "repoDigests": [
	I1225 12:49:39.626812 1466525 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1225 12:49:39.626826 1466525 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1225 12:49:39.626833 1466525 command_runner.go:130] >       ],
	I1225 12:49:39.626837 1466525 command_runner.go:130] >       "size": "750414",
	I1225 12:49:39.626844 1466525 command_runner.go:130] >       "uid": {
	I1225 12:49:39.626848 1466525 command_runner.go:130] >         "value": "65535"
	I1225 12:49:39.626854 1466525 command_runner.go:130] >       },
	I1225 12:49:39.626858 1466525 command_runner.go:130] >       "username": "",
	I1225 12:49:39.626864 1466525 command_runner.go:130] >       "spec": null,
	I1225 12:49:39.626869 1466525 command_runner.go:130] >       "pinned": false
	I1225 12:49:39.626874 1466525 command_runner.go:130] >     }
	I1225 12:49:39.626878 1466525 command_runner.go:130] >   ]
	I1225 12:49:39.626884 1466525 command_runner.go:130] > }
	I1225 12:49:39.626991 1466525 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 12:49:39.627003 1466525 cache_images.go:84] Images are preloaded, skipping loading
	I1225 12:49:39.627063 1466525 ssh_runner.go:195] Run: crio config
	I1225 12:49:39.676457 1466525 command_runner.go:130] ! time="2023-12-25 12:49:39.621939214Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1225 12:49:39.676507 1466525 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1225 12:49:39.687436 1466525 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1225 12:49:39.687534 1466525 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1225 12:49:39.687556 1466525 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1225 12:49:39.687563 1466525 command_runner.go:130] > #
	I1225 12:49:39.687575 1466525 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1225 12:49:39.687590 1466525 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1225 12:49:39.687606 1466525 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1225 12:49:39.687625 1466525 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1225 12:49:39.687641 1466525 command_runner.go:130] > # reload'.
	I1225 12:49:39.687653 1466525 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1225 12:49:39.687665 1466525 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1225 12:49:39.687680 1466525 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1225 12:49:39.687693 1466525 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1225 12:49:39.687704 1466525 command_runner.go:130] > [crio]
	I1225 12:49:39.687724 1466525 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1225 12:49:39.687737 1466525 command_runner.go:130] > # containers images, in this directory.
	I1225 12:49:39.687746 1466525 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1225 12:49:39.687800 1466525 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1225 12:49:39.687817 1466525 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1225 12:49:39.687831 1466525 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1225 12:49:39.687845 1466525 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1225 12:49:39.687854 1466525 command_runner.go:130] > storage_driver = "overlay"
	I1225 12:49:39.687863 1466525 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1225 12:49:39.687873 1466525 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1225 12:49:39.687882 1466525 command_runner.go:130] > storage_option = [
	I1225 12:49:39.687890 1466525 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1225 12:49:39.687895 1466525 command_runner.go:130] > ]
	I1225 12:49:39.687902 1466525 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1225 12:49:39.687907 1466525 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1225 12:49:39.687912 1466525 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1225 12:49:39.687917 1466525 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1225 12:49:39.687923 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1225 12:49:39.687931 1466525 command_runner.go:130] > # always happen on a node reboot
	I1225 12:49:39.687936 1466525 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1225 12:49:39.687941 1466525 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1225 12:49:39.687947 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1225 12:49:39.687959 1466525 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1225 12:49:39.687964 1466525 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1225 12:49:39.687972 1466525 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1225 12:49:39.687979 1466525 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1225 12:49:39.687983 1466525 command_runner.go:130] > # internal_wipe = true
	I1225 12:49:39.687988 1466525 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1225 12:49:39.687994 1466525 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1225 12:49:39.687999 1466525 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1225 12:49:39.688004 1466525 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1225 12:49:39.688010 1466525 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1225 12:49:39.688014 1466525 command_runner.go:130] > [crio.api]
	I1225 12:49:39.688019 1466525 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1225 12:49:39.688024 1466525 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1225 12:49:39.688029 1466525 command_runner.go:130] > # IP address on which the stream server will listen.
	I1225 12:49:39.688037 1466525 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1225 12:49:39.688044 1466525 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1225 12:49:39.688049 1466525 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1225 12:49:39.688053 1466525 command_runner.go:130] > # stream_port = "0"
	I1225 12:49:39.688058 1466525 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1225 12:49:39.688062 1466525 command_runner.go:130] > # stream_enable_tls = false
	I1225 12:49:39.688068 1466525 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1225 12:49:39.688075 1466525 command_runner.go:130] > # stream_idle_timeout = ""
	I1225 12:49:39.688081 1466525 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1225 12:49:39.688087 1466525 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1225 12:49:39.688097 1466525 command_runner.go:130] > # minutes.
	I1225 12:49:39.688102 1466525 command_runner.go:130] > # stream_tls_cert = ""
	I1225 12:49:39.688108 1466525 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1225 12:49:39.688114 1466525 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1225 12:49:39.688124 1466525 command_runner.go:130] > # stream_tls_key = ""
	I1225 12:49:39.688133 1466525 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1225 12:49:39.688139 1466525 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1225 12:49:39.688147 1466525 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1225 12:49:39.688156 1466525 command_runner.go:130] > # stream_tls_ca = ""
	I1225 12:49:39.688167 1466525 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:49:39.688175 1466525 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1225 12:49:39.688185 1466525 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:49:39.688193 1466525 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1225 12:49:39.688215 1466525 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1225 12:49:39.688224 1466525 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1225 12:49:39.688229 1466525 command_runner.go:130] > [crio.runtime]
	I1225 12:49:39.688237 1466525 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1225 12:49:39.688245 1466525 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1225 12:49:39.688252 1466525 command_runner.go:130] > # "nofile=1024:2048"
	I1225 12:49:39.688258 1466525 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1225 12:49:39.688265 1466525 command_runner.go:130] > # default_ulimits = [
	I1225 12:49:39.688269 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688278 1466525 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1225 12:49:39.688284 1466525 command_runner.go:130] > # no_pivot = false
	I1225 12:49:39.688290 1466525 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1225 12:49:39.688299 1466525 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1225 12:49:39.688309 1466525 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1225 12:49:39.688319 1466525 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1225 12:49:39.688324 1466525 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1225 12:49:39.688333 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:49:39.688341 1466525 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1225 12:49:39.688348 1466525 command_runner.go:130] > # Cgroup setting for conmon
	I1225 12:49:39.688358 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1225 12:49:39.688365 1466525 command_runner.go:130] > conmon_cgroup = "pod"
	I1225 12:49:39.688375 1466525 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1225 12:49:39.688389 1466525 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1225 12:49:39.688404 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:49:39.688415 1466525 command_runner.go:130] > conmon_env = [
	I1225 12:49:39.688429 1466525 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1225 12:49:39.688439 1466525 command_runner.go:130] > ]
	I1225 12:49:39.688452 1466525 command_runner.go:130] > # Additional environment variables to set for all the
	I1225 12:49:39.688466 1466525 command_runner.go:130] > # containers. These are overridden if set in the
	I1225 12:49:39.688479 1466525 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1225 12:49:39.688490 1466525 command_runner.go:130] > # default_env = [
	I1225 12:49:39.688499 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688508 1466525 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1225 12:49:39.688512 1466525 command_runner.go:130] > # selinux = false
	I1225 12:49:39.688519 1466525 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1225 12:49:39.688528 1466525 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1225 12:49:39.688534 1466525 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1225 12:49:39.688540 1466525 command_runner.go:130] > # seccomp_profile = ""
	I1225 12:49:39.688547 1466525 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1225 12:49:39.688555 1466525 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1225 12:49:39.688562 1466525 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1225 12:49:39.688568 1466525 command_runner.go:130] > # which might increase security.
	I1225 12:49:39.688573 1466525 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1225 12:49:39.688582 1466525 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1225 12:49:39.688589 1466525 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1225 12:49:39.688598 1466525 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1225 12:49:39.688604 1466525 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1225 12:49:39.688612 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:49:39.688618 1466525 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1225 12:49:39.688631 1466525 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1225 12:49:39.688640 1466525 command_runner.go:130] > # the cgroup blockio controller.
	I1225 12:49:39.688647 1466525 command_runner.go:130] > # blockio_config_file = ""
	I1225 12:49:39.688654 1466525 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1225 12:49:39.688661 1466525 command_runner.go:130] > # irqbalance daemon.
	I1225 12:49:39.688666 1466525 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1225 12:49:39.688675 1466525 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1225 12:49:39.688683 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:49:39.688688 1466525 command_runner.go:130] > # rdt_config_file = ""
	I1225 12:49:39.688694 1466525 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1225 12:49:39.688701 1466525 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1225 12:49:39.688708 1466525 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1225 12:49:39.688715 1466525 command_runner.go:130] > # separate_pull_cgroup = ""
	I1225 12:49:39.688724 1466525 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1225 12:49:39.688733 1466525 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1225 12:49:39.688739 1466525 command_runner.go:130] > # will be added.
	I1225 12:49:39.688744 1466525 command_runner.go:130] > # default_capabilities = [
	I1225 12:49:39.688750 1466525 command_runner.go:130] > # 	"CHOWN",
	I1225 12:49:39.688756 1466525 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1225 12:49:39.688763 1466525 command_runner.go:130] > # 	"FSETID",
	I1225 12:49:39.688768 1466525 command_runner.go:130] > # 	"FOWNER",
	I1225 12:49:39.688774 1466525 command_runner.go:130] > # 	"SETGID",
	I1225 12:49:39.688779 1466525 command_runner.go:130] > # 	"SETUID",
	I1225 12:49:39.688788 1466525 command_runner.go:130] > # 	"SETPCAP",
	I1225 12:49:39.688797 1466525 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1225 12:49:39.688801 1466525 command_runner.go:130] > # 	"KILL",
	I1225 12:49:39.688807 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688813 1466525 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1225 12:49:39.688822 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:49:39.688828 1466525 command_runner.go:130] > # default_sysctls = [
	I1225 12:49:39.688832 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688839 1466525 command_runner.go:130] > # List of devices on the host that a
	I1225 12:49:39.688849 1466525 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1225 12:49:39.688856 1466525 command_runner.go:130] > # allowed_devices = [
	I1225 12:49:39.688860 1466525 command_runner.go:130] > # 	"/dev/fuse",
	I1225 12:49:39.688864 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688874 1466525 command_runner.go:130] > # List of additional devices, specified as
	I1225 12:49:39.688885 1466525 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1225 12:49:39.688893 1466525 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1225 12:49:39.688925 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:49:39.688933 1466525 command_runner.go:130] > # additional_devices = [
	I1225 12:49:39.688936 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688945 1466525 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1225 12:49:39.688949 1466525 command_runner.go:130] > # cdi_spec_dirs = [
	I1225 12:49:39.688955 1466525 command_runner.go:130] > # 	"/etc/cdi",
	I1225 12:49:39.688960 1466525 command_runner.go:130] > # 	"/var/run/cdi",
	I1225 12:49:39.688966 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.688972 1466525 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1225 12:49:39.688981 1466525 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1225 12:49:39.688989 1466525 command_runner.go:130] > # Defaults to false.
	I1225 12:49:39.688994 1466525 command_runner.go:130] > # device_ownership_from_security_context = false
	I1225 12:49:39.689003 1466525 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1225 12:49:39.689011 1466525 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1225 12:49:39.689018 1466525 command_runner.go:130] > # hooks_dir = [
	I1225 12:49:39.689026 1466525 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1225 12:49:39.689033 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.689039 1466525 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1225 12:49:39.689048 1466525 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1225 12:49:39.689058 1466525 command_runner.go:130] > # its default mounts from the following two files:
	I1225 12:49:39.689065 1466525 command_runner.go:130] > #
	I1225 12:49:39.689071 1466525 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1225 12:49:39.689080 1466525 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1225 12:49:39.689088 1466525 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1225 12:49:39.689098 1466525 command_runner.go:130] > #
	I1225 12:49:39.689104 1466525 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1225 12:49:39.689113 1466525 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1225 12:49:39.689122 1466525 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1225 12:49:39.689128 1466525 command_runner.go:130] > #      only add mounts it finds in this file.
	I1225 12:49:39.689133 1466525 command_runner.go:130] > #
	I1225 12:49:39.689139 1466525 command_runner.go:130] > # default_mounts_file = ""
	I1225 12:49:39.689149 1466525 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1225 12:49:39.689159 1466525 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1225 12:49:39.689170 1466525 command_runner.go:130] > pids_limit = 1024
	I1225 12:49:39.689180 1466525 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1225 12:49:39.689188 1466525 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1225 12:49:39.689198 1466525 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1225 12:49:39.689208 1466525 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1225 12:49:39.689222 1466525 command_runner.go:130] > # log_size_max = -1
	I1225 12:49:39.689232 1466525 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1225 12:49:39.689239 1466525 command_runner.go:130] > # log_to_journald = false
	I1225 12:49:39.689245 1466525 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1225 12:49:39.689253 1466525 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1225 12:49:39.689259 1466525 command_runner.go:130] > # Path to directory for container attach sockets.
	I1225 12:49:39.689267 1466525 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1225 12:49:39.689273 1466525 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1225 12:49:39.689280 1466525 command_runner.go:130] > # bind_mount_prefix = ""
	I1225 12:49:39.689285 1466525 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1225 12:49:39.689292 1466525 command_runner.go:130] > # read_only = false
	I1225 12:49:39.689298 1466525 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1225 12:49:39.689307 1466525 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1225 12:49:39.689317 1466525 command_runner.go:130] > # live configuration reload.
	I1225 12:49:39.689324 1466525 command_runner.go:130] > # log_level = "info"
	I1225 12:49:39.689330 1466525 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1225 12:49:39.689338 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:49:39.689343 1466525 command_runner.go:130] > # log_filter = ""
	I1225 12:49:39.689352 1466525 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1225 12:49:39.689361 1466525 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1225 12:49:39.689369 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:49:39.689386 1466525 command_runner.go:130] > # uid_mappings = ""
	I1225 12:49:39.689401 1466525 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1225 12:49:39.689415 1466525 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1225 12:49:39.689426 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:49:39.689437 1466525 command_runner.go:130] > # gid_mappings = ""
	I1225 12:49:39.689451 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1225 12:49:39.689465 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:49:39.689478 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:49:39.689486 1466525 command_runner.go:130] > # minimum_mappable_uid = -1
	I1225 12:49:39.689493 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1225 12:49:39.689507 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:49:39.689517 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:49:39.689521 1466525 command_runner.go:130] > # minimum_mappable_gid = -1
	I1225 12:49:39.689528 1466525 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1225 12:49:39.689537 1466525 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1225 12:49:39.689543 1466525 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1225 12:49:39.689551 1466525 command_runner.go:130] > # ctr_stop_timeout = 30
	I1225 12:49:39.689556 1466525 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1225 12:49:39.689563 1466525 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1225 12:49:39.689570 1466525 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1225 12:49:39.689575 1466525 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1225 12:49:39.689583 1466525 command_runner.go:130] > drop_infra_ctr = false
	I1225 12:49:39.689589 1466525 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1225 12:49:39.689598 1466525 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1225 12:49:39.689606 1466525 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1225 12:49:39.689613 1466525 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1225 12:49:39.689620 1466525 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1225 12:49:39.689628 1466525 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1225 12:49:39.689635 1466525 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1225 12:49:39.689646 1466525 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1225 12:49:39.689653 1466525 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1225 12:49:39.689659 1466525 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1225 12:49:39.689667 1466525 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1225 12:49:39.689679 1466525 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1225 12:49:39.689686 1466525 command_runner.go:130] > # default_runtime = "runc"
	I1225 12:49:39.689692 1466525 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1225 12:49:39.689702 1466525 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1225 12:49:39.689711 1466525 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1225 12:49:39.689718 1466525 command_runner.go:130] > # creation as a file is not desired either.
	I1225 12:49:39.689727 1466525 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1225 12:49:39.689734 1466525 command_runner.go:130] > # the hostname is being managed dynamically.
	I1225 12:49:39.689739 1466525 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1225 12:49:39.689743 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.689752 1466525 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1225 12:49:39.689760 1466525 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1225 12:49:39.689770 1466525 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1225 12:49:39.689783 1466525 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1225 12:49:39.689789 1466525 command_runner.go:130] > #
	I1225 12:49:39.689797 1466525 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1225 12:49:39.689805 1466525 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1225 12:49:39.689812 1466525 command_runner.go:130] > #  runtime_type = "oci"
	I1225 12:49:39.689818 1466525 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1225 12:49:39.689825 1466525 command_runner.go:130] > #  privileged_without_host_devices = false
	I1225 12:49:39.689834 1466525 command_runner.go:130] > #  allowed_annotations = []
	I1225 12:49:39.689841 1466525 command_runner.go:130] > # Where:
	I1225 12:49:39.689846 1466525 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1225 12:49:39.689855 1466525 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1225 12:49:39.689864 1466525 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1225 12:49:39.689872 1466525 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1225 12:49:39.689879 1466525 command_runner.go:130] > #   in $PATH.
	I1225 12:49:39.689892 1466525 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1225 12:49:39.689899 1466525 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1225 12:49:39.689908 1466525 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1225 12:49:39.689914 1466525 command_runner.go:130] > #   state.
	I1225 12:49:39.689923 1466525 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1225 12:49:39.689932 1466525 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1225 12:49:39.689942 1466525 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1225 12:49:39.689950 1466525 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1225 12:49:39.689957 1466525 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1225 12:49:39.689969 1466525 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1225 12:49:39.689977 1466525 command_runner.go:130] > #   The currently recognized values are:
	I1225 12:49:39.689986 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1225 12:49:39.689996 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1225 12:49:39.690004 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1225 12:49:39.690013 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1225 12:49:39.690022 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1225 12:49:39.690031 1466525 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1225 12:49:39.690038 1466525 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1225 12:49:39.690047 1466525 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1225 12:49:39.690054 1466525 command_runner.go:130] > #   should be moved to the container's cgroup
	I1225 12:49:39.690062 1466525 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1225 12:49:39.690067 1466525 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1225 12:49:39.690077 1466525 command_runner.go:130] > runtime_type = "oci"
	I1225 12:49:39.690085 1466525 command_runner.go:130] > runtime_root = "/run/runc"
	I1225 12:49:39.690089 1466525 command_runner.go:130] > runtime_config_path = ""
	I1225 12:49:39.690100 1466525 command_runner.go:130] > monitor_path = ""
	I1225 12:49:39.690104 1466525 command_runner.go:130] > monitor_cgroup = ""
	I1225 12:49:39.690111 1466525 command_runner.go:130] > monitor_exec_cgroup = ""
	I1225 12:49:39.690117 1466525 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1225 12:49:39.690124 1466525 command_runner.go:130] > # running containers
	I1225 12:49:39.690128 1466525 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1225 12:49:39.690137 1466525 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1225 12:49:39.690193 1466525 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1225 12:49:39.690203 1466525 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1225 12:49:39.690208 1466525 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1225 12:49:39.690213 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1225 12:49:39.690217 1466525 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1225 12:49:39.690225 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1225 12:49:39.690230 1466525 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1225 12:49:39.690237 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1225 12:49:39.690246 1466525 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1225 12:49:39.690255 1466525 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1225 12:49:39.690264 1466525 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1225 12:49:39.690274 1466525 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1225 12:49:39.690284 1466525 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1225 12:49:39.690294 1466525 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1225 12:49:39.690307 1466525 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1225 12:49:39.690320 1466525 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1225 12:49:39.690329 1466525 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1225 12:49:39.690339 1466525 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1225 12:49:39.690346 1466525 command_runner.go:130] > # Example:
	I1225 12:49:39.690351 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1225 12:49:39.690358 1466525 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1225 12:49:39.690365 1466525 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1225 12:49:39.690377 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1225 12:49:39.690388 1466525 command_runner.go:130] > # cpuset = 0
	I1225 12:49:39.690395 1466525 command_runner.go:130] > # cpushares = "0-1"
	I1225 12:49:39.690406 1466525 command_runner.go:130] > # Where:
	I1225 12:49:39.690425 1466525 command_runner.go:130] > # The workload name is workload-type.
	I1225 12:49:39.690451 1466525 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1225 12:49:39.690466 1466525 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1225 12:49:39.690480 1466525 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1225 12:49:39.690496 1466525 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1225 12:49:39.690509 1466525 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1225 12:49:39.690517 1466525 command_runner.go:130] > # 
	I1225 12:49:39.690523 1466525 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1225 12:49:39.690529 1466525 command_runner.go:130] > #
	I1225 12:49:39.690535 1466525 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1225 12:49:39.690541 1466525 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1225 12:49:39.690547 1466525 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1225 12:49:39.690556 1466525 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1225 12:49:39.690562 1466525 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1225 12:49:39.690569 1466525 command_runner.go:130] > [crio.image]
	I1225 12:49:39.690575 1466525 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1225 12:49:39.690582 1466525 command_runner.go:130] > # default_transport = "docker://"
	I1225 12:49:39.690588 1466525 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1225 12:49:39.690600 1466525 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:49:39.690608 1466525 command_runner.go:130] > # global_auth_file = ""
	I1225 12:49:39.690614 1466525 command_runner.go:130] > # The image used to instantiate infra containers.
	I1225 12:49:39.690622 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:49:39.690629 1466525 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1225 12:49:39.690639 1466525 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1225 12:49:39.690648 1466525 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:49:39.690655 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:49:39.690662 1466525 command_runner.go:130] > # pause_image_auth_file = ""
	I1225 12:49:39.690669 1466525 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1225 12:49:39.690677 1466525 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1225 12:49:39.690686 1466525 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1225 12:49:39.690694 1466525 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1225 12:49:39.690701 1466525 command_runner.go:130] > # pause_command = "/pause"
	I1225 12:49:39.690707 1466525 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1225 12:49:39.690716 1466525 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1225 12:49:39.690722 1466525 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1225 12:49:39.690728 1466525 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1225 12:49:39.690737 1466525 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1225 12:49:39.690741 1466525 command_runner.go:130] > # signature_policy = ""
	I1225 12:49:39.690747 1466525 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1225 12:49:39.690753 1466525 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1225 12:49:39.690757 1466525 command_runner.go:130] > # changing them here.
	I1225 12:49:39.690761 1466525 command_runner.go:130] > # insecure_registries = [
	I1225 12:49:39.690764 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.690771 1466525 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1225 12:49:39.690776 1466525 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1225 12:49:39.690780 1466525 command_runner.go:130] > # image_volumes = "mkdir"
	I1225 12:49:39.690785 1466525 command_runner.go:130] > # Temporary directory to use for storing big files
	I1225 12:49:39.690789 1466525 command_runner.go:130] > # big_files_temporary_dir = ""
	I1225 12:49:39.690795 1466525 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1225 12:49:39.690798 1466525 command_runner.go:130] > # CNI plugins.
	I1225 12:49:39.690802 1466525 command_runner.go:130] > [crio.network]
	I1225 12:49:39.690808 1466525 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1225 12:49:39.690813 1466525 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1225 12:49:39.690817 1466525 command_runner.go:130] > # cni_default_network = ""
	I1225 12:49:39.690825 1466525 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1225 12:49:39.690834 1466525 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1225 12:49:39.690840 1466525 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1225 12:49:39.690846 1466525 command_runner.go:130] > # plugin_dirs = [
	I1225 12:49:39.690851 1466525 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1225 12:49:39.690857 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.690863 1466525 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1225 12:49:39.690869 1466525 command_runner.go:130] > [crio.metrics]
	I1225 12:49:39.690877 1466525 command_runner.go:130] > # Globally enable or disable metrics support.
	I1225 12:49:39.690887 1466525 command_runner.go:130] > enable_metrics = true
	I1225 12:49:39.690895 1466525 command_runner.go:130] > # Specify enabled metrics collectors.
	I1225 12:49:39.690900 1466525 command_runner.go:130] > # Per default all metrics are enabled.
	I1225 12:49:39.690908 1466525 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1225 12:49:39.690917 1466525 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1225 12:49:39.690925 1466525 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1225 12:49:39.690932 1466525 command_runner.go:130] > # metrics_collectors = [
	I1225 12:49:39.690936 1466525 command_runner.go:130] > # 	"operations",
	I1225 12:49:39.690943 1466525 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1225 12:49:39.690951 1466525 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1225 12:49:39.690958 1466525 command_runner.go:130] > # 	"operations_errors",
	I1225 12:49:39.690963 1466525 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1225 12:49:39.690970 1466525 command_runner.go:130] > # 	"image_pulls_by_name",
	I1225 12:49:39.690975 1466525 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1225 12:49:39.690981 1466525 command_runner.go:130] > # 	"image_pulls_failures",
	I1225 12:49:39.690986 1466525 command_runner.go:130] > # 	"image_pulls_successes",
	I1225 12:49:39.690993 1466525 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1225 12:49:39.690997 1466525 command_runner.go:130] > # 	"image_layer_reuse",
	I1225 12:49:39.691004 1466525 command_runner.go:130] > # 	"containers_oom_total",
	I1225 12:49:39.691008 1466525 command_runner.go:130] > # 	"containers_oom",
	I1225 12:49:39.691015 1466525 command_runner.go:130] > # 	"processes_defunct",
	I1225 12:49:39.691019 1466525 command_runner.go:130] > # 	"operations_total",
	I1225 12:49:39.691024 1466525 command_runner.go:130] > # 	"operations_latency_seconds",
	I1225 12:49:39.691031 1466525 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1225 12:49:39.691036 1466525 command_runner.go:130] > # 	"operations_errors_total",
	I1225 12:49:39.691043 1466525 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1225 12:49:39.691048 1466525 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1225 12:49:39.691057 1466525 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1225 12:49:39.691065 1466525 command_runner.go:130] > # 	"image_pulls_success_total",
	I1225 12:49:39.691072 1466525 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1225 12:49:39.691077 1466525 command_runner.go:130] > # 	"containers_oom_count_total",
	I1225 12:49:39.691083 1466525 command_runner.go:130] > # ]
	I1225 12:49:39.691088 1466525 command_runner.go:130] > # The port on which the metrics server will listen.
	I1225 12:49:39.691099 1466525 command_runner.go:130] > # metrics_port = 9090
	I1225 12:49:39.691104 1466525 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1225 12:49:39.691111 1466525 command_runner.go:130] > # metrics_socket = ""
	I1225 12:49:39.691116 1466525 command_runner.go:130] > # The certificate for the secure metrics server.
	I1225 12:49:39.691124 1466525 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1225 12:49:39.691133 1466525 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1225 12:49:39.691140 1466525 command_runner.go:130] > # certificate on any modification event.
	I1225 12:49:39.691144 1466525 command_runner.go:130] > # metrics_cert = ""
	I1225 12:49:39.691152 1466525 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1225 12:49:39.691160 1466525 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1225 12:49:39.691167 1466525 command_runner.go:130] > # metrics_key = ""
	I1225 12:49:39.691176 1466525 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1225 12:49:39.691186 1466525 command_runner.go:130] > [crio.tracing]
	I1225 12:49:39.691192 1466525 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1225 12:49:39.691199 1466525 command_runner.go:130] > # enable_tracing = false
	I1225 12:49:39.691205 1466525 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1225 12:49:39.691213 1466525 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1225 12:49:39.691221 1466525 command_runner.go:130] > # Number of samples to collect per million spans.
	I1225 12:49:39.691225 1466525 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1225 12:49:39.691234 1466525 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1225 12:49:39.691240 1466525 command_runner.go:130] > [crio.stats]
	I1225 12:49:39.691250 1466525 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1225 12:49:39.691259 1466525 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1225 12:49:39.691266 1466525 command_runner.go:130] > # stats_collection_period = 0
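The dump above is the effective CRI-O configuration that minikube inspects. As a minimal sketch of how one of the options marked "This option supports live configuration reload" (for example pause_image) could be overridden on such a node, assuming CRI-O's standard /etc/crio/crio.conf.d/ drop-in directory and a systemd-managed crio unit, neither of which is shown in this log:

    # Hypothetical drop-in; the file name and directory are assumptions, not taken from this log
    sudo tee /etc/crio/crio.conf.d/99-pause-image.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    EOF
    sudo systemctl reload crio   # sends SIGHUP, which triggers the live configuration reload described above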
	I1225 12:49:39.691371 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:49:39.691395 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:49:39.691426 1466525 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:49:39.691457 1466525 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-544936 NodeName:multinode-544936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:49:39.691608 1466525 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-544936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
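The generated kubeadm config ends here. A hedged way to sanity-check it by hand, assuming the staged kubeadm binary listed further down in the log and the "kubeadm config validate" subcommand available in v1.28:

    # Sketch only: the file path is the one minikube writes to below (/var/tmp/minikube/kubeadm.yaml.new)
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new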
	
	I1225 12:49:39.691741 1466525 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-544936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 12:49:39.691831 1466525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:49:39.701524 1466525 command_runner.go:130] > kubeadm
	I1225 12:49:39.701544 1466525 command_runner.go:130] > kubectl
	I1225 12:49:39.701552 1466525 command_runner.go:130] > kubelet
	I1225 12:49:39.701789 1466525 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:49:39.701882 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 12:49:39.711430 1466525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1225 12:49:39.727938 1466525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:49:39.744079 1466525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
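The three scp lines above stage the kubelet drop-in, the kubelet unit file and the kubeadm config onto the node. A hedged spot-check, assuming a systemd host and the destination paths shown in those lines:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # should contain the ExecStart shown above
    sudo systemctl daemon-reload                                     # pick up the new drop-in
    systemctl show kubelet -p ExecStart --no-pager                   # confirm the rendered ExecStart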
	I1225 12:49:39.760694 1466525 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I1225 12:49:39.764451 1466525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
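The /bin/bash one-liner above rewrites /etc/hosts idempotently: it drops any stale control-plane.minikube.internal entry, appends the current mapping for 192.168.39.21, and copies the result back with sudo. Spelled out step by step (a sketch; the temporary file name is illustrative):

    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new    # filter out any old entry
    printf '192.168.39.21\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new  # append the current mapping
    sudo cp /tmp/hosts.new /etc/hosts                                            # install the rewritten file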
	I1225 12:49:39.775916 1466525 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936 for IP: 192.168.39.21
	I1225 12:49:39.775965 1466525 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:49:39.776164 1466525 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:49:39.776205 1466525 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:49:39.776266 1466525 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key
	I1225 12:49:39.964966 1466525 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key.86be2464
	I1225 12:49:39.965060 1466525 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key
	I1225 12:49:39.965073 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1225 12:49:39.965086 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1225 12:49:39.965096 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1225 12:49:39.965109 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1225 12:49:39.965118 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:49:39.965129 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:49:39.965138 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:49:39.965148 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:49:39.965244 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:49:39.965301 1466525 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:49:39.965313 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:49:39.965334 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:49:39.965370 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:49:39.965416 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:49:39.965465 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:49:39.965498 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:49:39.965513 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:49:39.965527 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:49:39.966321 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 12:49:39.991637 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 12:49:40.017255 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 12:49:40.042013 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 12:49:40.066980 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:49:40.092304 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:49:40.117470 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:49:40.142153 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:49:40.164820 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:49:40.189115 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:49:40.215188 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:49:40.240949 1466525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 12:49:40.259181 1466525 ssh_runner.go:195] Run: openssl version
	I1225 12:49:40.265409 1466525 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1225 12:49:40.265837 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:49:40.275883 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:49:40.280365 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:49:40.280400 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:49:40.280444 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:49:40.285588 1466525 command_runner.go:130] > b5213941
	I1225 12:49:40.285856 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:49:40.295365 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:49:40.304864 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:49:40.309184 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:49:40.309309 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:49:40.309372 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:49:40.314348 1466525 command_runner.go:130] > 51391683
	I1225 12:49:40.314741 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 12:49:40.324571 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:49:40.334356 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:49:40.338843 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:49:40.338933 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:49:40.338986 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:49:40.344198 1466525 command_runner.go:130] > 3ec20f2e
	I1225 12:49:40.344403 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
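
The three steps above install each CA into the guest's trust store the same way: compute the OpenSSL subject hash of the PEM file, then symlink /etc/ssl/certs/<hash>.0 at it. A minimal Go sketch of that idea follows; it is not minikube's actual implementation, and the example path is one of the files shown in the log.

    // Sketch: install a CA the way the log shows — hash the cert with openssl,
    // then point /etc/ssl/certs/<hash>.0 at the PEM file (equivalent of `ln -fs`).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
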
	I1225 12:49:40.353889 1466525 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:49:40.358205 1466525 command_runner.go:130] > ca.crt
	I1225 12:49:40.358232 1466525 command_runner.go:130] > ca.key
	I1225 12:49:40.358260 1466525 command_runner.go:130] > healthcheck-client.crt
	I1225 12:49:40.358268 1466525 command_runner.go:130] > healthcheck-client.key
	I1225 12:49:40.358276 1466525 command_runner.go:130] > peer.crt
	I1225 12:49:40.358282 1466525 command_runner.go:130] > peer.key
	I1225 12:49:40.358288 1466525 command_runner.go:130] > server.crt
	I1225 12:49:40.358294 1466525 command_runner.go:130] > server.key
	I1225 12:49:40.358392 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 12:49:40.364230 1466525 command_runner.go:130] > Certificate will not expire
	I1225 12:49:40.364497 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 12:49:40.370234 1466525 command_runner.go:130] > Certificate will not expire
	I1225 12:49:40.370322 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 12:49:40.376288 1466525 command_runner.go:130] > Certificate will not expire
	I1225 12:49:40.376425 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 12:49:40.382148 1466525 command_runner.go:130] > Certificate will not expire
	I1225 12:49:40.382357 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 12:49:40.388328 1466525 command_runner.go:130] > Certificate will not expire
	I1225 12:49:40.388405 1466525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 12:49:40.394127 1466525 command_runner.go:130] > Certificate will not expire
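
Each "Certificate will not expire" line above is the output of `openssl x509 -noout -checkend 86400`, i.e. a check that the certificate is still valid 24 hours from now. Below is a small Go equivalent using only the standard library; the path and the 24h window come from the log, but this is an illustrative stand-in, not the code that produced these lines.

    // Sketch: report whether a PEM-encoded certificate expires within the window,
    // the same question `openssl x509 -checkend 86400` answers.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
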
	I1225 12:49:40.394390 1466525 kubeadm.go:404] StartCluster: {Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:49:40.394548 1466525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 12:49:40.394641 1466525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 12:49:40.433039 1466525 cri.go:89] found id: ""
	I1225 12:49:40.433207 1466525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 12:49:40.442873 1466525 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1225 12:49:40.442899 1466525 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1225 12:49:40.442909 1466525 command_runner.go:130] > /var/lib/minikube/etcd:
	I1225 12:49:40.442915 1466525 command_runner.go:130] > member
	I1225 12:49:40.443002 1466525 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 12:49:40.443021 1466525 kubeadm.go:636] restartCluster start
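
The restart decision above is driven by a single presence check: if the kubelet config files and the etcd data directory already exist, minikube attempts a cluster restart rather than a fresh init. A hedged Go sketch of that kind of check follows; the paths are the ones the log lists, but the function and its exact semantics are illustrative, not minikube's actual logic.

    // Sketch: decide restart vs. fresh init by checking for existing cluster state.
    package main

    import (
        "fmt"
        "os"
    )

    func hasExistingCluster() bool {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        if hasExistingCluster() {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no existing cluster state, performing fresh init")
        }
    }
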
	I1225 12:49:40.443076 1466525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 12:49:40.451948 1466525 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:40.452758 1466525 kubeconfig.go:92] found "multinode-544936" server: "https://192.168.39.21:8443"
	I1225 12:49:40.453252 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:49:40.453522 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:49:40.454123 1466525 cert_rotation.go:137] Starting client certificate rotation controller
	I1225 12:49:40.454373 1466525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 12:49:40.463261 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:40.463327 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:40.474500 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:40.964091 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:40.964228 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:40.975815 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:41.463390 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:41.463490 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:41.475824 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:41.963423 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:41.963552 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:41.975198 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:42.463360 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:42.463468 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:42.474405 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:42.964077 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:42.964180 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:42.975265 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:43.464021 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:43.464131 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:43.475463 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:43.963620 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:43.963718 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:43.975162 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:44.464157 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:44.464299 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:44.475561 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:44.964178 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:44.964294 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:44.975692 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:45.464342 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:45.464450 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:45.475442 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:45.964125 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:45.964211 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:45.976148 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:46.463696 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:46.463795 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:46.475240 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:46.963807 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:46.963915 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:46.975095 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:47.464322 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:47.464409 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:47.475958 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:47.963495 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:47.963621 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:47.974782 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:48.463356 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:48.463454 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:48.475878 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:48.963420 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:48.963522 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:48.975001 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:49.464008 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:49.464138 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:49.475773 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:49.963373 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:49.963473 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:49.974250 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:50.464087 1466525 api_server.go:166] Checking apiserver status ...
	I1225 12:49:50.464210 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 12:49:50.475950 1466525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 12:49:50.475986 1466525 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
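
The ten seconds of repeated pgrep failures above end with a "context deadline exceeded" verdict: the check is retried roughly every 500ms until it either succeeds or a deadline expires. A minimal Go sketch of that poll-until-deadline pattern follows; pollAPIServer is a hypothetical stand-in for the SSH-executed check, not minikube's code.

    // Sketch: retry a check every 500ms until success or the context deadline.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func pollAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // Same check the log runs over SSH: look for a kube-apiserver process.
            if err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("apiserver error: " + ctx.Err().Error())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := pollAPIServer(ctx); err != nil {
            fmt.Println("needs reconfigure:", err)
        }
    }
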
	I1225 12:49:50.475999 1466525 kubeadm.go:1135] stopping kube-system containers ...
	I1225 12:49:50.476013 1466525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 12:49:50.476089 1466525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 12:49:50.516461 1466525 cri.go:89] found id: ""
	I1225 12:49:50.516552 1466525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 12:49:50.531842 1466525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 12:49:50.540813 1466525 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1225 12:49:50.540840 1466525 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1225 12:49:50.540847 1466525 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1225 12:49:50.540854 1466525 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:49:50.540888 1466525 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 12:49:50.540948 1466525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 12:49:50.549591 1466525 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 12:49:50.549621 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:49:50.667484 1466525 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 12:49:50.667978 1466525 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1225 12:49:50.668403 1466525 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1225 12:49:50.668840 1466525 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 12:49:50.669448 1466525 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1225 12:49:50.669973 1466525 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1225 12:49:50.670973 1466525 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1225 12:49:50.671445 1466525 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1225 12:49:50.671859 1466525 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1225 12:49:50.672288 1466525 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 12:49:50.672763 1466525 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 12:49:50.673758 1466525 command_runner.go:130] > [certs] Using the existing "sa" key
	I1225 12:49:50.674748 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:49:50.725642 1466525 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 12:49:51.010085 1466525 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 12:49:51.266538 1466525 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 12:49:51.410260 1466525 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 12:49:51.720534 1466525 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 12:49:51.724439 1466525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.049663479s)
	I1225 12:49:51.724478 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:49:51.790086 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:49:51.791380 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:49:51.791406 1466525 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1225 12:49:51.910677 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:49:51.972559 1466525 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 12:49:51.972588 1466525 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 12:49:51.978508 1466525 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 12:49:51.979698 1466525 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 12:49:51.982358 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:49:52.041145 1466525 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
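
Rather than a full `kubeadm init`, the restart path above re-runs the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A sketch of that sequence via os/exec is below; the binary and config paths are taken from the log, and the phase list stops where the log does (the `addon all` phase is run later, after the API server is healthy).

    // Sketch: replay the kubeadm init phases shown above, one at a time.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
                return
            }
        }
    }
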
	I1225 12:49:52.045322 1466525 api_server.go:52] waiting for apiserver process to appear ...
	I1225 12:49:52.045414 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:52.545655 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:53.046354 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:53.545987 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:54.046480 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:54.546494 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:49:54.570221 1466525 command_runner.go:130] > 1095
	I1225 12:49:54.570317 1466525 api_server.go:72] duration metric: took 2.52499634s to wait for apiserver process to appear ...
	I1225 12:49:54.570332 1466525 api_server.go:88] waiting for apiserver healthz status ...
	I1225 12:49:54.570357 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:49:58.431969 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 12:49:58.432002 1466525 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 12:49:58.432019 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:49:58.556193 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 12:49:58.556234 1466525 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 12:49:58.571373 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:49:58.577997 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 12:49:58.578030 1466525 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 12:49:59.070589 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:49:59.076201 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 12:49:59.076232 1466525 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 12:49:59.571140 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:49:59.595080 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 12:49:59.595128 1466525 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 12:50:00.070559 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:50:00.075781 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
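
After several 403 and 500 responses while post-start hooks finish, the /healthz probe above finally returns 200 "ok". The sketch below shows one way to issue that probe with the profile's client certificate; the certificate paths come from the kapi client config logged earlier, and the retry cadence is illustrative rather than minikube's exact behaviour.

    // Sketch: poll GET /healthz with mutual TLS until it returns HTTP 200.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func healthzClient(certFile, keyFile, caFile string) (*http.Client, error) {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return nil, err
        }
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        return &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
            },
        }, nil
    }

    func main() {
        base := "/home/jenkins/minikube-integration/17847-1442600/.minikube"
        client, err := healthzClient(
            base+"/profiles/multinode-544936/client.crt",
            base+"/profiles/multinode-544936/client.key",
            base+"/ca.crt",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for i := 0; i < 20; i++ {
            resp, err := client.Get("https://192.168.39.21:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
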
	I1225 12:50:00.075924 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/version
	I1225 12:50:00.075938 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:00.075948 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:00.075955 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:00.085391 1466525 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1225 12:50:00.085415 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:00.085422 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:00 GMT
	I1225 12:50:00.085428 1466525 round_trippers.go:580]     Audit-Id: 1ac99863-face-40e7-b8ea-5abd4b6c51dd
	I1225 12:50:00.085433 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:00.085438 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:00.085443 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:00.085448 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:00.085453 1466525 round_trippers.go:580]     Content-Length: 264
	I1225 12:50:00.085480 1466525 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1225 12:50:00.085568 1466525 api_server.go:141] control plane version: v1.28.4
	I1225 12:50:00.085595 1466525 api_server.go:131] duration metric: took 5.51525472s to wait for apiserver health ...
	I1225 12:50:00.085608 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:50:00.085622 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:50:00.087939 1466525 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1225 12:50:00.089629 1466525 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 12:50:00.103854 1466525 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1225 12:50:00.103886 1466525 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1225 12:50:00.103897 1466525 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1225 12:50:00.103907 1466525 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:50:00.103916 1466525 command_runner.go:130] > Access: 2023-12-25 12:49:25.300350221 +0000
	I1225 12:50:00.103924 1466525 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1225 12:50:00.103940 1466525 command_runner.go:130] > Change: 2023-12-25 12:49:23.350350221 +0000
	I1225 12:50:00.103946 1466525 command_runner.go:130] >  Birth: -
	I1225 12:50:00.104006 1466525 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1225 12:50:00.104023 1466525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1225 12:50:00.141208 1466525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 12:50:01.502594 1466525 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:50:01.509649 1466525 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:50:01.513951 1466525 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1225 12:50:01.530164 1466525 command_runner.go:130] > daemonset.apps/kindnet configured
	I1225 12:50:01.532922 1466525 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.391662456s)
	I1225 12:50:01.532963 1466525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 12:50:01.533159 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:01.533178 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.533192 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.533203 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.537264 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:01.537297 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.537309 1466525 round_trippers.go:580]     Audit-Id: a89916ab-4995-4fa6-93f6-4c974e0cade9
	I1225 12:50:01.537318 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.537327 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.537340 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.537348 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.537359 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.538574 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"791"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82598 chars]
	I1225 12:50:01.542992 1466525 system_pods.go:59] 12 kube-system pods found
	I1225 12:50:01.543033 1466525 system_pods.go:61] "coredns-5dd5756b68-mg2zk" [4f4e21f4-8e73-4b81-a080-c42b6980ee3b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 12:50:01.543041 1466525 system_pods.go:61] "etcd-multinode-544936" [8dc9103e-ec1a-40f4-80f8-4f4918bb5e33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 12:50:01.543048 1466525 system_pods.go:61] "kindnet-2hjhm" [8cfe7daa-3fc7-485a-8794-117466297c5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:01.543053 1466525 system_pods.go:61] "kindnet-7cr8v" [2136f166-f4d1-4529-a932-010126e9fc7d] Running
	I1225 12:50:01.543061 1466525 system_pods.go:61] "kindnet-mjlfm" [a8f29535-29de-4e87-a068-63a97cc46b60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:01.543070 1466525 system_pods.go:61] "kube-apiserver-multinode-544936" [d0fda9c8-27cf-4ecc-b379-39745cb7ec19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 12:50:01.543083 1466525 system_pods.go:61] "kube-controller-manager-multinode-544936" [e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 12:50:01.543099 1466525 system_pods.go:61] "kube-proxy-7z5x6" [304c848e-4ecf-433d-a17d-b1b33784ae08] Running
	I1225 12:50:01.543105 1466525 system_pods.go:61] "kube-proxy-gkxgw" [d14fbb1d-1200-463f-bd2b-17943371448c] Running
	I1225 12:50:01.543112 1466525 system_pods.go:61] "kube-proxy-k4jc7" [14699a0d-601b-4bc3-9584-7ac67822a926] Running
	I1225 12:50:01.543118 1466525 system_pods.go:61] "kube-scheduler-multinode-544936" [e8027489-26d3-44c3-aeea-286e6689e75e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 12:50:01.543132 1466525 system_pods.go:61] "storage-provisioner" [897346ba-f39d-4771-913e-535bff9ca6b7] Running
	I1225 12:50:01.543141 1466525 system_pods.go:74] duration metric: took 10.170848ms to wait for pod list to return data ...
	I1225 12:50:01.543151 1466525 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:50:01.543216 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:50:01.543225 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.543233 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.543239 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.546757 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:01.546778 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.546785 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.546791 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.546796 1466525 round_trippers.go:580]     Audit-Id: c6cecfdb-5333-426d-9690-026f17196bf2
	I1225 12:50:01.546802 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.546807 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.546831 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.547410 1466525 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"791"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16474 chars]
	I1225 12:50:01.548286 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:01.548314 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:01.548328 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:01.548334 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:01.548344 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:01.548356 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:01.548381 1466525 node_conditions.go:105] duration metric: took 5.203627ms to run NodePressure ...
	I1225 12:50:01.548409 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 12:50:01.789689 1466525 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1225 12:50:01.789720 1466525 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1225 12:50:01.789758 1466525 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 12:50:01.789917 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1225 12:50:01.789932 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.789944 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.789954 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.793672 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:01.793704 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.793716 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.793724 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.793732 1466525 round_trippers.go:580]     Audit-Id: 670d8d50-c42e-460d-bf4a-3f0c12af71f8
	I1225 12:50:01.793739 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.793747 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.793755 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.794301 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1225 12:50:01.795525 1466525 kubeadm.go:787] kubelet initialised
	I1225 12:50:01.795548 1466525 kubeadm.go:788] duration metric: took 5.780834ms waiting for restarted kubelet to initialise ...
	I1225 12:50:01.795557 1466525 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:50:01.795625 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:01.795634 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.795641 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.795647 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.799000 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:01.799026 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.799036 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.799045 1466525 round_trippers.go:580]     Audit-Id: 29c98452-ae41-4ac3-8ffe-2589b907466a
	I1225 12:50:01.799057 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.799082 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.799105 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.799124 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.800907 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83167 chars]
	I1225 12:50:01.804127 1466525 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:01.804245 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:01.804256 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.804264 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.804270 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.806992 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:01.807013 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.807020 1466525 round_trippers.go:580]     Audit-Id: 83daf73f-f9a7-4773-868d-a253f6363543
	I1225 12:50:01.807025 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.807031 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.807036 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.807041 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.807046 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.807277 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:01.807820 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:01.807838 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.807845 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.807852 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.809923 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:01.809938 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.809945 1466525 round_trippers.go:580]     Audit-Id: 01b106c0-c510-46ad-9bbc-3ba26cb692ea
	I1225 12:50:01.809950 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.809961 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.809966 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.809971 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.809976 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.810250 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:01.810592 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.810624 1466525 pod_ready.go:81] duration metric: took 6.464714ms waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:01.810635 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.810649 1466525 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:01.810719 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:01.810726 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.810735 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.810747 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.814299 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:01.814322 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.814332 1466525 round_trippers.go:580]     Audit-Id: 732258aa-ba10-48bc-8266-a2abb45578ab
	I1225 12:50:01.814339 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.814348 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.814355 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.814363 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.814375 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.814521 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:01.814958 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:01.814979 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.814995 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.815004 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.819278 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:01.819300 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.819310 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.819318 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.819326 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.819334 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.819345 1466525 round_trippers.go:580]     Audit-Id: 741d7b50-1e95-4dff-b3ef-a870a1b9f5e9
	I1225 12:50:01.819354 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.819531 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:01.819940 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "etcd-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.819970 1466525 pod_ready.go:81] duration metric: took 9.312621ms waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:01.819983 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "etcd-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.820004 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:01.820083 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:50:01.820091 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.820099 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.820109 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.824206 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:01.824230 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.824240 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.824250 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.824258 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.824266 1466525 round_trippers.go:580]     Audit-Id: 27f367e8-b033-484f-8f44-594891c82b29
	I1225 12:50:01.824278 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.824288 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.824901 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"766","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1225 12:50:01.825438 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:01.825454 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.825461 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.825468 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.827696 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:01.827712 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.827718 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.827723 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.827746 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.827753 1466525 round_trippers.go:580]     Audit-Id: e2524066-5203-42b7-92ed-760b4a2fa10e
	I1225 12:50:01.827761 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.827775 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.828230 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:01.828536 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "kube-apiserver-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.828556 1466525 pod_ready.go:81] duration metric: took 8.541291ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:01.828566 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "kube-apiserver-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.828574 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:01.828655 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:50:01.828669 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.828680 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.828690 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.831302 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:01.831320 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.831329 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.831337 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.831345 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.831360 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.831371 1466525 round_trippers.go:580]     Audit-Id: cbf13f21-f9e8-4a24-ab14-1291dff2ab38
	I1225 12:50:01.831382 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.831569 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"760","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1225 12:50:01.933442 1466525 request.go:629] Waited for 101.333219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:01.933505 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:01.933510 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:01.933519 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:01.933525 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:01.936200 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:01.936229 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:01.936248 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:01.936257 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:01.936265 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:01.936274 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:01.936282 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:01 GMT
	I1225 12:50:01.936289 1466525 round_trippers.go:580]     Audit-Id: 689ed02d-faae-4c68-bfb2-568785eff904
	I1225 12:50:01.936440 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:01.936953 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "kube-controller-manager-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.936978 1466525 pod_ready.go:81] duration metric: took 108.394262ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:01.936991 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "kube-controller-manager-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:01.937000 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:02.133135 1466525 request.go:629] Waited for 196.061035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:50:02.133217 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:50:02.133222 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:02.133230 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:02.133244 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:02.136246 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:02.136275 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:02.136286 1466525 round_trippers.go:580]     Audit-Id: c8873624-d7f9-410f-8370-e30a9e5756d0
	I1225 12:50:02.136294 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:02.136301 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:02.136309 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:02.136316 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:02.136324 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:02 GMT
	I1225 12:50:02.136585 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"507","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1225 12:50:02.333608 1466525 request.go:629] Waited for 196.441493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:50:02.333710 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:50:02.333718 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:02.333731 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:02.333740 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:02.336665 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:02.336715 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:02.336726 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:02.336734 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:02.336742 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:02 GMT
	I1225 12:50:02.336750 1466525 round_trippers.go:580]     Audit-Id: a74bc9d0-e11b-444f-8e30-060d932c1648
	I1225 12:50:02.336758 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:02.336769 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:02.336911 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"737","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_42_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I1225 12:50:02.337286 1466525 pod_ready.go:92] pod "kube-proxy-7z5x6" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:02.337312 1466525 pod_ready.go:81] duration metric: took 400.302938ms waiting for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:02.337324 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:02.533210 1466525 request.go:629] Waited for 195.761541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:50:02.533288 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:50:02.533296 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:02.533307 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:02.533315 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:02.536550 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:02.536579 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:02.536589 1466525 round_trippers.go:580]     Audit-Id: 59153303-d3f0-4ef2-a3d7-1e623af8c14a
	I1225 12:50:02.536595 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:02.536600 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:02.536605 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:02.536613 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:02.536618 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:02 GMT
	I1225 12:50:02.536794 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gkxgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d14fbb1d-1200-463f-bd2b-17943371448c","resourceVersion":"714","creationTimestamp":"2023-12-25T12:41:20Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1225 12:50:02.733824 1466525 request.go:629] Waited for 196.369359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:50:02.733910 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:50:02.733925 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:02.733951 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:02.733960 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:02.737697 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:02.737743 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:02.737752 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:02.737759 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:02.737766 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:02.737773 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:02.737781 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:02 GMT
	I1225 12:50:02.737789 1466525 round_trippers.go:580]     Audit-Id: 06ccd2a5-2507-4fc2-bb0e-52a0af5200d0
	I1225 12:50:02.738502 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"3744762d-9d11-4193-82ab-cd70245fefca","resourceVersion":"733","creationTimestamp":"2023-12-25T12:42:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_42_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:42:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4084 chars]
	I1225 12:50:02.738823 1466525 pod_ready.go:92] pod "kube-proxy-gkxgw" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:02.738844 1466525 pod_ready.go:81] duration metric: took 401.500289ms waiting for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:02.738855 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:02.933999 1466525 request.go:629] Waited for 195.061187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:50:02.934120 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:50:02.934133 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:02.934146 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:02.934160 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:02.937482 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:02.937523 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:02.937531 1466525 round_trippers.go:580]     Audit-Id: 512d2eee-833d-451c-ade9-3a8340db9f46
	I1225 12:50:02.937536 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:02.937541 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:02.937546 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:02.937551 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:02.937556 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:02 GMT
	I1225 12:50:02.938391 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"790","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:50:03.133201 1466525 request.go:629] Waited for 194.31836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.133295 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.133302 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:03.133316 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:03.133327 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:03.136291 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:03.136326 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:03.136335 1466525 round_trippers.go:580]     Audit-Id: fdb3bf94-fcea-4062-992a-65404b9fd7e8
	I1225 12:50:03.136341 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:03.136346 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:03.136351 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:03.136357 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:03.136362 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:03 GMT
	I1225 12:50:03.136936 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:03.137291 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "kube-proxy-k4jc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:03.137311 1466525 pod_ready.go:81] duration metric: took 398.450908ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:03.137319 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "kube-proxy-k4jc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:03.137328 1466525 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:03.333222 1466525 request.go:629] Waited for 195.812992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:50:03.333288 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:50:03.333293 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:03.333301 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:03.333308 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:03.336032 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:03.336054 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:03.336061 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:03.336066 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:03.336071 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:03.336076 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:03 GMT
	I1225 12:50:03.336082 1466525 round_trippers.go:580]     Audit-Id: 218bd5ca-576f-40be-97b0-19d73c03ae2f
	I1225 12:50:03.336087 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:03.336313 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"761","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1225 12:50:03.533119 1466525 request.go:629] Waited for 196.296344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.533217 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.533222 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:03.533241 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:03.533247 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:03.536043 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:03.536073 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:03.536084 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:03.536093 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:03.536105 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:03 GMT
	I1225 12:50:03.536113 1466525 round_trippers.go:580]     Audit-Id: fbca34f8-7044-4794-b997-0cc680730ec8
	I1225 12:50:03.536121 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:03.536129 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:03.536298 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:03.536777 1466525 pod_ready.go:97] node "multinode-544936" hosting pod "kube-scheduler-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:03.536801 1466525 pod_ready.go:81] duration metric: took 399.466795ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	E1225 12:50:03.536812 1466525 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-544936" hosting pod "kube-scheduler-multinode-544936" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-544936" has status "Ready":"False"
	I1225 12:50:03.536823 1466525 pod_ready.go:38] duration metric: took 1.741257965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:50:03.536844 1466525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 12:50:03.547848 1466525 command_runner.go:130] > -16
	I1225 12:50:03.548144 1466525 ops.go:34] apiserver oom_adj: -16
	I1225 12:50:03.548175 1466525 kubeadm.go:640] restartCluster took 23.105136723s
	I1225 12:50:03.548184 1466525 kubeadm.go:406] StartCluster complete in 23.153819296s
	I1225 12:50:03.548202 1466525 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:50:03.548306 1466525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:50:03.549049 1466525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:50:03.549277 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 12:50:03.549471 1466525 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 12:50:03.549633 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:50:03.552423 1466525 out.go:177] * Enabled addons: 
	I1225 12:50:03.549654 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:50:03.549975 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:50:03.553794 1466525 addons.go:508] enable addons completed in 4.331194ms: enabled=[]
	I1225 12:50:03.554117 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:50:03.554132 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:03.554140 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:03.554145 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:03.556861 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:03.556878 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:03.556884 1466525 round_trippers.go:580]     Content-Length: 291
	I1225 12:50:03.556889 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:03 GMT
	I1225 12:50:03.556894 1466525 round_trippers.go:580]     Audit-Id: 2366e8fc-ae1a-4dae-a071-e40f24021582
	I1225 12:50:03.556899 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:03.556904 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:03.556909 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:03.556914 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:03.557048 1466525 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"793","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1225 12:50:03.557220 1466525 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-544936" context rescaled to 1 replicas
	I1225 12:50:03.557252 1466525 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 12:50:03.558711 1466525 out.go:177] * Verifying Kubernetes components...
	I1225 12:50:03.559934 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:50:03.655287 1466525 command_runner.go:130] > apiVersion: v1
	I1225 12:50:03.655313 1466525 command_runner.go:130] > data:
	I1225 12:50:03.655318 1466525 command_runner.go:130] >   Corefile: |
	I1225 12:50:03.655322 1466525 command_runner.go:130] >     .:53 {
	I1225 12:50:03.655329 1466525 command_runner.go:130] >         log
	I1225 12:50:03.655335 1466525 command_runner.go:130] >         errors
	I1225 12:50:03.655339 1466525 command_runner.go:130] >         health {
	I1225 12:50:03.655343 1466525 command_runner.go:130] >            lameduck 5s
	I1225 12:50:03.655347 1466525 command_runner.go:130] >         }
	I1225 12:50:03.655352 1466525 command_runner.go:130] >         ready
	I1225 12:50:03.655359 1466525 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1225 12:50:03.655363 1466525 command_runner.go:130] >            pods insecure
	I1225 12:50:03.655372 1466525 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1225 12:50:03.655376 1466525 command_runner.go:130] >            ttl 30
	I1225 12:50:03.655380 1466525 command_runner.go:130] >         }
	I1225 12:50:03.655387 1466525 command_runner.go:130] >         prometheus :9153
	I1225 12:50:03.655392 1466525 command_runner.go:130] >         hosts {
	I1225 12:50:03.655399 1466525 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1225 12:50:03.655407 1466525 command_runner.go:130] >            fallthrough
	I1225 12:50:03.655411 1466525 command_runner.go:130] >         }
	I1225 12:50:03.655416 1466525 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1225 12:50:03.655423 1466525 command_runner.go:130] >            max_concurrent 1000
	I1225 12:50:03.655456 1466525 command_runner.go:130] >         }
	I1225 12:50:03.655468 1466525 command_runner.go:130] >         cache 30
	I1225 12:50:03.655483 1466525 command_runner.go:130] >         loop
	I1225 12:50:03.655490 1466525 command_runner.go:130] >         reload
	I1225 12:50:03.655497 1466525 command_runner.go:130] >         loadbalance
	I1225 12:50:03.655507 1466525 command_runner.go:130] >     }
	I1225 12:50:03.655514 1466525 command_runner.go:130] > kind: ConfigMap
	I1225 12:50:03.655523 1466525 command_runner.go:130] > metadata:
	I1225 12:50:03.655537 1466525 command_runner.go:130] >   creationTimestamp: "2023-12-25T12:39:31Z"
	I1225 12:50:03.655547 1466525 command_runner.go:130] >   name: coredns
	I1225 12:50:03.655555 1466525 command_runner.go:130] >   namespace: kube-system
	I1225 12:50:03.655559 1466525 command_runner.go:130] >   resourceVersion: "391"
	I1225 12:50:03.655567 1466525 command_runner.go:130] >   uid: 1c94dbaf-9e87-4c5a-a00d-da7d7c13d59d
	I1225 12:50:03.657984 1466525 node_ready.go:35] waiting up to 6m0s for node "multinode-544936" to be "Ready" ...
	I1225 12:50:03.658018 1466525 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 12:50:03.733365 1466525 request.go:629] Waited for 75.23581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.733427 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:03.733432 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:03.733440 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:03.733449 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:03.736414 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:03.736443 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:03.736454 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:03.736462 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:03.736469 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:03.736477 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:03.736485 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:03 GMT
	I1225 12:50:03.736493 1466525 round_trippers.go:580]     Audit-Id: b4c894f0-299a-4f06-9df8-da234ff3d86b
	I1225 12:50:03.736629 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:04.158753 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:04.158786 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:04.158799 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:04.158808 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:04.161485 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:04.161515 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:04.161524 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:04.161532 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:04 GMT
	I1225 12:50:04.161539 1466525 round_trippers.go:580]     Audit-Id: fb2157dc-6d8f-4a61-a228-e632bf4a9625
	I1225 12:50:04.161547 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:04.161555 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:04.161563 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:04.162327 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:04.659094 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:04.659121 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:04.659130 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:04.659136 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:04.661940 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:04.661971 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:04.661983 1466525 round_trippers.go:580]     Audit-Id: 364154b3-425d-427c-aa26-487ba8a93495
	I1225 12:50:04.661991 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:04.661999 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:04.662015 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:04.662024 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:04.662031 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:04 GMT
	I1225 12:50:04.662253 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"739","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1225 12:50:05.158898 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:05.158927 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.158936 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.158945 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.161523 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:05.161551 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.161558 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.161564 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.161569 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.161575 1466525 round_trippers.go:580]     Audit-Id: 2144da09-dd29-4c20-a065-a08bc2a30447
	I1225 12:50:05.161580 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.161603 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.161914 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:05.162272 1466525 node_ready.go:49] node "multinode-544936" has status "Ready":"True"
	I1225 12:50:05.162290 1466525 node_ready.go:38] duration metric: took 1.504274882s waiting for node "multinode-544936" to be "Ready" ...
	I1225 12:50:05.162300 1466525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:50:05.162373 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:05.162382 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.162388 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.162400 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.166667 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:05.166689 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.166699 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.166706 1466525 round_trippers.go:580]     Audit-Id: c8f721bb-8213-4ae6-a1fb-6b6c586f13c2
	I1225 12:50:05.166713 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.166720 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.166728 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.166737 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.168198 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82917 chars]
	I1225 12:50:05.170672 1466525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:05.170769 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:05.170784 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.170795 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.170804 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.173951 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:05.173973 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.173981 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.173988 1466525 round_trippers.go:580]     Audit-Id: f07c2b15-2d5d-464a-8407-f6765a6b40eb
	I1225 12:50:05.173996 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.174008 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.174019 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.174028 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.174301 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:05.174848 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:05.174864 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.174872 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.174880 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.177045 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:05.177064 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.177088 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.177096 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.177104 1466525 round_trippers.go:580]     Audit-Id: 7d80dbe2-fb4f-4542-9d4b-875058645e75
	I1225 12:50:05.177112 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.177124 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.177134 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.177383 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:05.671312 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:05.671344 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.671357 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.671367 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.675500 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:05.675523 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.675529 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.675535 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.675541 1466525 round_trippers.go:580]     Audit-Id: b29605bd-14c7-434a-a1e4-1ba233e9d68f
	I1225 12:50:05.675546 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.675551 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.675558 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.675830 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:05.676304 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:05.676320 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:05.676330 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:05.676339 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:05.678803 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:05.678828 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:05.678839 1466525 round_trippers.go:580]     Audit-Id: 2570005c-af89-46fa-a756-268c91785070
	I1225 12:50:05.678848 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:05.678855 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:05.678860 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:05.678865 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:05.678871 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:05 GMT
	I1225 12:50:05.679148 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:06.171889 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:06.171922 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:06.171931 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:06.171937 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:06.175398 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:06.175417 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:06.175426 1466525 round_trippers.go:580]     Audit-Id: e0f3d974-fcc2-40ca-9e77-1041ab8ebc1c
	I1225 12:50:06.175435 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:06.175443 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:06.175451 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:06.175458 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:06.175465 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:06 GMT
	I1225 12:50:06.175724 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:06.176334 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:06.176352 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:06.176364 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:06.176374 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:06.179111 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:06.179130 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:06.179140 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:06 GMT
	I1225 12:50:06.179148 1466525 round_trippers.go:580]     Audit-Id: 06de2272-7a24-46f8-b70f-ca606908f620
	I1225 12:50:06.179155 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:06.179163 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:06.179170 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:06.179178 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:06.179520 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:06.671213 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:06.671243 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:06.671252 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:06.671258 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:06.674596 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:06.674625 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:06.674636 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:06 GMT
	I1225 12:50:06.674646 1466525 round_trippers.go:580]     Audit-Id: a1d64706-f93f-4e64-b039-df734abea341
	I1225 12:50:06.674655 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:06.674662 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:06.674668 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:06.674674 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:06.674965 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:06.675446 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:06.675462 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:06.675470 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:06.675482 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:06.678002 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:06.678019 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:06.678026 1466525 round_trippers.go:580]     Audit-Id: 2141d6fd-e1e3-4200-a944-388a5e6d8a8f
	I1225 12:50:06.678031 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:06.678043 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:06.678051 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:06.678063 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:06.678075 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:06 GMT
	I1225 12:50:06.678404 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:07.171454 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:07.171480 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:07.171489 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:07.171503 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:07.176485 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:07.176518 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:07.176529 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:07.176537 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:07.176544 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:07.176554 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:07 GMT
	I1225 12:50:07.176569 1466525 round_trippers.go:580]     Audit-Id: da92ec9d-dba4-442a-bc9f-d349dad4a2ef
	I1225 12:50:07.176577 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:07.176844 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:07.177518 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:07.177541 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:07.177551 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:07.177559 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:07.181181 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:07.181206 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:07.181214 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:07.181222 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:07.181229 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:07.181237 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:07 GMT
	I1225 12:50:07.181244 1466525 round_trippers.go:580]     Audit-Id: de43a2ff-ad87-4578-9ed2-5268679ef502
	I1225 12:50:07.181253 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:07.181436 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:07.181889 1466525 pod_ready.go:102] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"False"
	I1225 12:50:07.671041 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:07.671066 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:07.671075 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:07.671082 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:07.674066 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:07.674096 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:07.674109 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:07.674115 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:07 GMT
	I1225 12:50:07.674121 1466525 round_trippers.go:580]     Audit-Id: 12ffa901-01db-46c2-8f4f-00f81b17ab57
	I1225 12:50:07.674126 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:07.674131 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:07.674136 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:07.674294 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:07.674971 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:07.674990 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:07.675005 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:07.675014 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:07.678019 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:07.678040 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:07.678049 1466525 round_trippers.go:580]     Audit-Id: 48b15062-6259-4912-94bc-ab4a8322fa31
	I1225 12:50:07.678057 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:07.678064 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:07.678080 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:07.678092 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:07.678104 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:07 GMT
	I1225 12:50:07.678251 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:08.170920 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:08.170947 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.170956 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.170962 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.173904 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:08.173930 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.173942 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.173951 1466525 round_trippers.go:580]     Audit-Id: e823f015-3a87-4328-8858-6f03ec8595b7
	I1225 12:50:08.173958 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.173965 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.173974 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.173982 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.174232 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"768","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1225 12:50:08.174772 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:08.174797 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.174805 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.174810 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.176720 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:50:08.176735 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.176741 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.176749 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.176758 1466525 round_trippers.go:580]     Audit-Id: 60441518-3d4a-48be-ab4b-f0a4519c9f06
	I1225 12:50:08.176771 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.176779 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.176789 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.176914 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:08.671672 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:50:08.671699 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.671709 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.671715 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.675721 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:08.675744 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.675751 1466525 round_trippers.go:580]     Audit-Id: 57eda65c-6f2b-45d1-a487-6b127885029f
	I1225 12:50:08.675757 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.675762 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.675767 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.675772 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.675777 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.676109 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1225 12:50:08.676583 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:08.676597 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.676605 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.676611 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.693853 1466525 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1225 12:50:08.693878 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.693885 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.693891 1466525 round_trippers.go:580]     Audit-Id: 132b672e-d179-453e-86b8-915070f1d045
	I1225 12:50:08.693898 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.693903 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.693908 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.693915 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.694576 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:08.694917 1466525 pod_ready.go:92] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:08.694938 1466525 pod_ready.go:81] duration metric: took 3.524244733s waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:08.694951 1466525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:08.695042 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:08.695052 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.695062 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.695072 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.702311 1466525 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1225 12:50:08.702331 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.702337 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.702343 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.702348 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.702353 1466525 round_trippers.go:580]     Audit-Id: c63baeff-63f3-4767-85ee-7a2d4eed1eb2
	I1225 12:50:08.702358 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.702363 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.702926 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:08.703326 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:08.703336 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:08.703344 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:08.703350 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:08.712831 1466525 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1225 12:50:08.712868 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:08.712876 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:08.712884 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:08 GMT
	I1225 12:50:08.712889 1466525 round_trippers.go:580]     Audit-Id: c4268d6c-22e8-48d8-89eb-d92fddddd405
	I1225 12:50:08.712894 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:08.712899 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:08.712903 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:08.713032 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:09.195323 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:09.195350 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:09.195359 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:09.195366 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:09.198579 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:09.198612 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:09.198624 1466525 round_trippers.go:580]     Audit-Id: ccf054b9-c254-49eb-9a91-41ce1f6b6cb3
	I1225 12:50:09.198633 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:09.198642 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:09.198652 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:09.198657 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:09.198663 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:09 GMT
	I1225 12:50:09.198815 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:09.199369 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:09.199387 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:09.199399 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:09.199408 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:09.202101 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:09.202128 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:09.202138 1466525 round_trippers.go:580]     Audit-Id: 6c038d5b-7805-4d65-b023-4f92dbf46a63
	I1225 12:50:09.202147 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:09.202156 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:09.202164 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:09.202173 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:09.202185 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:09 GMT
	I1225 12:50:09.202326 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:09.695305 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:09.695330 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:09.695339 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:09.695345 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:09.698090 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:09.698118 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:09.698142 1466525 round_trippers.go:580]     Audit-Id: 75886d12-6de7-4193-badc-462dfb84dd55
	I1225 12:50:09.698148 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:09.698156 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:09.698164 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:09.698172 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:09.698180 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:09 GMT
	I1225 12:50:09.698733 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:09.699200 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:09.699218 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:09.699226 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:09.699232 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:09.701664 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:09.701687 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:09.701697 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:09.701705 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:09 GMT
	I1225 12:50:09.701711 1466525 round_trippers.go:580]     Audit-Id: ac60dd7d-f020-40d8-a0f7-459f1b0b43f9
	I1225 12:50:09.701718 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:09.701726 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:09.701744 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:09.702908 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:10.195575 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:10.195607 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:10.195617 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:10.195626 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:10.198584 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:10.198616 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:10.198623 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:10 GMT
	I1225 12:50:10.198629 1466525 round_trippers.go:580]     Audit-Id: b2b81576-d460-490e-86b9-09db04b67f27
	I1225 12:50:10.198634 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:10.198639 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:10.198644 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:10.198649 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:10.198842 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:10.199442 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:10.199460 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:10.199471 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:10.199480 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:10.201965 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:10.201988 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:10.201999 1466525 round_trippers.go:580]     Audit-Id: edf6f297-f851-4418-9a3e-a76e006f52e0
	I1225 12:50:10.202005 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:10.202011 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:10.202022 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:10.202031 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:10.202036 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:10 GMT
	I1225 12:50:10.202345 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:10.695217 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:10.695249 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:10.695259 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:10.695266 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:10.698133 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:10.698161 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:10.698171 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:10.698198 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:10.698203 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:10 GMT
	I1225 12:50:10.698208 1466525 round_trippers.go:580]     Audit-Id: 54df7a7c-9488-4cfa-b543-994683f7d10d
	I1225 12:50:10.698214 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:10.698221 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:10.698631 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:10.699147 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:10.699167 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:10.699175 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:10.699181 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:10.701574 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:10.701596 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:10.701606 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:10.701613 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:10.701619 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:10.701623 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:10.701629 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:10 GMT
	I1225 12:50:10.701636 1466525 round_trippers.go:580]     Audit-Id: 088d0428-2c89-4417-b324-fd0a584eed82
	I1225 12:50:10.701917 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:10.702243 1466525 pod_ready.go:102] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"False"
	I1225 12:50:11.195570 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:11.195595 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:11.195603 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:11.195609 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:11.198841 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:11.198863 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:11.198870 1466525 round_trippers.go:580]     Audit-Id: 095aeeb9-338a-425e-b9fe-d570bf7a2146
	I1225 12:50:11.198876 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:11.198881 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:11.198886 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:11.198891 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:11.198896 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:11 GMT
	I1225 12:50:11.199910 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:11.200330 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:11.200346 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:11.200354 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:11.200359 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:11.203299 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:11.203314 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:11.203320 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:11.203326 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:11 GMT
	I1225 12:50:11.203333 1466525 round_trippers.go:580]     Audit-Id: 29dc7d73-6378-4a30-8333-27ba578d5605
	I1225 12:50:11.203340 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:11.203348 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:11.203356 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:11.203828 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:11.695514 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:11.695540 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:11.695549 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:11.695555 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:11.699446 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:11.699471 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:11.699479 1466525 round_trippers.go:580]     Audit-Id: 46fa18f7-b664-4de0-a521-5cbb2965b79a
	I1225 12:50:11.699485 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:11.699490 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:11.699496 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:11.699528 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:11.699537 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:11 GMT
	I1225 12:50:11.700073 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:11.700493 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:11.700507 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:11.700514 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:11.700520 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:11.702595 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:11.702617 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:11.702628 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:11.702643 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:11 GMT
	I1225 12:50:11.702652 1466525 round_trippers.go:580]     Audit-Id: 18ba6749-91d3-4326-85ea-f851d1256f0e
	I1225 12:50:11.702661 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:11.702673 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:11.702682 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:11.702887 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:12.196139 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:12.196167 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.196176 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.196182 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.199468 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:12.199500 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.199509 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.199515 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.199521 1466525 round_trippers.go:580]     Audit-Id: 3e292081-3579-4177-afad-99b6e2d53cd7
	I1225 12:50:12.199526 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.199559 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.199568 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.200260 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"765","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1225 12:50:12.200687 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:12.200701 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.200710 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.200715 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.210277 1466525 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1225 12:50:12.210322 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.210334 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.210342 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.210351 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.210360 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.210369 1466525 round_trippers.go:580]     Audit-Id: 21d71265-dd75-427a-8fce-58985618ed97
	I1225 12:50:12.210377 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.210551 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:12.695270 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:50:12.695304 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.695313 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.695320 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.698146 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.698170 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.698178 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.698186 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.698191 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.698197 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.698202 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.698207 1466525 round_trippers.go:580]     Audit-Id: 579db49c-606e-4812-97f1-289806538897
	I1225 12:50:12.698464 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"884","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1225 12:50:12.698984 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:12.699006 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.699014 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.699020 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.701353 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.701377 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.701385 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.701391 1466525 round_trippers.go:580]     Audit-Id: 37538fa9-bfd1-43af-aed2-1b4ebf7a02d5
	I1225 12:50:12.701396 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.701401 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.701406 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.701411 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.701718 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:12.702046 1466525 pod_ready.go:92] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:12.702065 1466525 pod_ready.go:81] duration metric: took 4.007107181s waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.702081 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.702187 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:50:12.702203 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.702214 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.702227 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.704679 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.704706 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.704721 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.704736 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.704745 1466525 round_trippers.go:580]     Audit-Id: 6b70669f-04cd-4f3a-a49e-70aaaf7ed00f
	I1225 12:50:12.704754 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.704763 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.704772 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.704989 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"874","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1225 12:50:12.705577 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:12.705595 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.705606 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.705615 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.707813 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.707831 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.707837 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.707843 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.707848 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.707857 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.707865 1466525 round_trippers.go:580]     Audit-Id: 351a6d58-0980-4840-b7d4-0d0768d4c946
	I1225 12:50:12.707883 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.708085 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:12.708473 1466525 pod_ready.go:92] pod "kube-apiserver-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:12.708495 1466525 pod_ready.go:81] duration metric: took 6.407693ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.708506 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.708571 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:50:12.708582 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.708589 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.708595 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.710783 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.710807 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.710817 1466525 round_trippers.go:580]     Audit-Id: 793f0fac-3a82-4f91-91af-e2aafd5f3590
	I1225 12:50:12.710825 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.710833 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.710841 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.710853 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.710867 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.711041 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"858","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1225 12:50:12.711541 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:12.711558 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.711565 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.711571 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.713570 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:50:12.713591 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.713608 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.713620 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.713632 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.713642 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.713655 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.713661 1466525 round_trippers.go:580]     Audit-Id: 4879078b-5be7-405d-ad39-7205f4c192b0
	I1225 12:50:12.713800 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:12.714126 1466525 pod_ready.go:92] pod "kube-controller-manager-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:12.714146 1466525 pod_ready.go:81] duration metric: took 5.63382ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.714159 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.714226 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:50:12.714233 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.714240 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.714248 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.716939 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.716963 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.716972 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.716981 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.716997 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.717006 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.717015 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.717023 1466525 round_trippers.go:580]     Audit-Id: d0f42a89-90e9-45d9-9224-48c4139dc18b
	I1225 12:50:12.717867 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"507","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1225 12:50:12.718360 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:50:12.718380 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.718390 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.718399 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.720864 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.720882 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.720889 1466525 round_trippers.go:580]     Audit-Id: 7f41de61-2ef9-4f00-9226-0c8a727fcf90
	I1225 12:50:12.720895 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.720900 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.720906 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.720913 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.720919 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.721064 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0","resourceVersion":"737","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_42_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I1225 12:50:12.721404 1466525 pod_ready.go:92] pod "kube-proxy-7z5x6" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:12.721423 1466525 pod_ready.go:81] duration metric: took 7.257395ms waiting for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.721432 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.733936 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:50:12.733985 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.733998 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.734005 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.737104 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:12.737131 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.737141 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.737150 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.737158 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.737166 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.737178 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.737198 1466525 round_trippers.go:580]     Audit-Id: ee5f5e29-6a57-4f7f-b6d6-6ba4c02e8f7e
	I1225 12:50:12.737350 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gkxgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d14fbb1d-1200-463f-bd2b-17943371448c","resourceVersion":"714","creationTimestamp":"2023-12-25T12:41:20Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1225 12:50:12.933151 1466525 request.go:629] Waited for 195.337896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:50:12.933267 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:50:12.933273 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:12.933282 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:12.933291 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:12.936195 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:12.936221 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:12.936229 1466525 round_trippers.go:580]     Audit-Id: 59754e3d-08fe-4d4b-85c8-a0e5fa9fc417
	I1225 12:50:12.936265 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:12.936275 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:12.936280 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:12.936285 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:12.936293 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:12 GMT
	I1225 12:50:12.936602 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"3744762d-9d11-4193-82ab-cd70245fefca","resourceVersion":"878","creationTimestamp":"2023-12-25T12:42:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_42_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:42:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I1225 12:50:12.936976 1466525 pod_ready.go:92] pod "kube-proxy-gkxgw" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:12.936997 1466525 pod_ready.go:81] duration metric: took 215.559294ms waiting for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:12.937007 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:13.133482 1466525 request.go:629] Waited for 196.389605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:50:13.133557 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:50:13.133562 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.133570 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.133576 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.136557 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:13.136592 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.136602 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.136612 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.136620 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.136627 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.136636 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.136654 1466525 round_trippers.go:580]     Audit-Id: 2691a23d-4147-4f42-a184-498489b050ff
	I1225 12:50:13.136874 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"790","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:50:13.333842 1466525 request.go:629] Waited for 196.417004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:13.333924 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:13.333930 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.333947 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.333959 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.337326 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:13.337358 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.337369 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.337378 1466525 round_trippers.go:580]     Audit-Id: a3236149-a99b-4b9e-9827-6e4a320eed65
	I1225 12:50:13.337393 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.337402 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.337410 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.337418 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.337581 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:13.338036 1466525 pod_ready.go:92] pod "kube-proxy-k4jc7" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:13.338056 1466525 pod_ready.go:81] duration metric: took 401.042611ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:13.338071 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:13.533905 1466525 request.go:629] Waited for 195.727815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:50:13.534014 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:50:13.534025 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.534033 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.534042 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.537280 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:13.537309 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.537318 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.537326 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.537338 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.537351 1466525 round_trippers.go:580]     Audit-Id: 3e71cbf4-907f-41d1-8de7-2afa66a3a81a
	I1225 12:50:13.537360 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.537368 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.537574 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"876","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1225 12:50:13.733532 1466525 request.go:629] Waited for 195.408596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:13.733652 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:50:13.733669 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.733680 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.733694 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.736550 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:13.736580 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.736590 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.736599 1466525 round_trippers.go:580]     Audit-Id: a2c0246c-98e5-4d24-9f6a-d6ae283e3a76
	I1225 12:50:13.736616 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.736629 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.736637 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.736646 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.736815 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1225 12:50:13.737235 1466525 pod_ready.go:92] pod "kube-scheduler-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:50:13.737259 1466525 pod_ready.go:81] duration metric: took 399.175719ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:50:13.737279 1466525 pod_ready.go:38] duration metric: took 8.574960226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
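(For readers following the pod_ready.go lines above: the loop repeatedly GETs each system-critical pod, and the node it runs on, until the pod reports a Ready condition or the 6m0s budget expires. A minimal client-go sketch of that kind of wait is below; it is an illustration only, not minikube's actual helper. The kubeconfig path, the 500ms poll interval, and the function name waitPodReady are assumptions made for the sketch.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a single pod until its PodReady condition is True,
    // mirroring the per-pod waits recorded in the log (up to 6m0s each).
    // Poll interval of 500ms is an assumption for this sketch.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        // Assumes a standard kubeconfig at the default location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // kube-proxy-k4jc7 is one of the pods waited on in this run.
        fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-k4jc7"))
    }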
	I1225 12:50:13.737296 1466525 api_server.go:52] waiting for apiserver process to appear ...
	I1225 12:50:13.737358 1466525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:50:13.753634 1466525 command_runner.go:130] > 1095
	I1225 12:50:13.753682 1466525 api_server.go:72] duration metric: took 10.19640492s to wait for apiserver process to appear ...
	I1225 12:50:13.753694 1466525 api_server.go:88] waiting for apiserver healthz status ...
	I1225 12:50:13.753718 1466525 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:50:13.759191 1466525 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I1225 12:50:13.759304 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/version
	I1225 12:50:13.759315 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.759328 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.759339 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.760417 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:50:13.760436 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.760443 1466525 round_trippers.go:580]     Audit-Id: e0ba3b85-38fd-49df-a3b2-2c4cd057047a
	I1225 12:50:13.760448 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.760464 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.760470 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.760475 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.760483 1466525 round_trippers.go:580]     Content-Length: 264
	I1225 12:50:13.760491 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.760599 1466525 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1225 12:50:13.760677 1466525 api_server.go:141] control plane version: v1.28.4
	I1225 12:50:13.760720 1466525 api_server.go:131] duration metric: took 7.006036ms to wait for apiserver health ...
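(The two requests above are the apiserver health check: a GET to /healthz expecting the literal body "ok", then a GET to /version to read the control-plane version. A stand-alone sketch of the same probe against the endpoint seen in this run, 192.168.39.21:8443, follows; TLS verification is skipped here purely to keep the sketch short, which is not a claim about how minikube's own client is configured.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The cluster serves a self-signed CA; skip verification for this
        // throwaway probe only.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.21:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("GET %s -> %d\n%s\n", path, resp.StatusCode, body)
        }
    }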
	I1225 12:50:13.760735 1466525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 12:50:13.933135 1466525 request.go:629] Waited for 172.30541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:13.933223 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:13.933229 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:13.933237 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:13.933243 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:13.937376 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:50:13.937419 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:13.937436 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:13.937449 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:13.937458 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:13 GMT
	I1225 12:50:13.937476 1466525 round_trippers.go:580]     Audit-Id: 23cc1140-7cd7-4f45-9db2-9ff67c6350ed
	I1225 12:50:13.937490 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:13.937495 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:13.938966 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"884"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1225 12:50:13.941518 1466525 system_pods.go:59] 12 kube-system pods found
	I1225 12:50:13.941545 1466525 system_pods.go:61] "coredns-5dd5756b68-mg2zk" [4f4e21f4-8e73-4b81-a080-c42b6980ee3b] Running
	I1225 12:50:13.941550 1466525 system_pods.go:61] "etcd-multinode-544936" [8dc9103e-ec1a-40f4-80f8-4f4918bb5e33] Running
	I1225 12:50:13.941553 1466525 system_pods.go:61] "kindnet-2hjhm" [8cfe7daa-3fc7-485a-8794-117466297c5a] Running
	I1225 12:50:13.941563 1466525 system_pods.go:61] "kindnet-7cr8v" [2136f166-f4d1-4529-a932-010126e9fc7d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:13.941569 1466525 system_pods.go:61] "kindnet-mjlfm" [a8f29535-29de-4e87-a068-63a97cc46b60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:13.941577 1466525 system_pods.go:61] "kube-apiserver-multinode-544936" [d0fda9c8-27cf-4ecc-b379-39745cb7ec19] Running
	I1225 12:50:13.941584 1466525 system_pods.go:61] "kube-controller-manager-multinode-544936" [e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0] Running
	I1225 12:50:13.941588 1466525 system_pods.go:61] "kube-proxy-7z5x6" [304c848e-4ecf-433d-a17d-b1b33784ae08] Running
	I1225 12:50:13.941592 1466525 system_pods.go:61] "kube-proxy-gkxgw" [d14fbb1d-1200-463f-bd2b-17943371448c] Running
	I1225 12:50:13.941596 1466525 system_pods.go:61] "kube-proxy-k4jc7" [14699a0d-601b-4bc3-9584-7ac67822a926] Running
	I1225 12:50:13.941604 1466525 system_pods.go:61] "kube-scheduler-multinode-544936" [e8027489-26d3-44c3-aeea-286e6689e75e] Running
	I1225 12:50:13.941613 1466525 system_pods.go:61] "storage-provisioner" [897346ba-f39d-4771-913e-535bff9ca6b7] Running
	I1225 12:50:13.941619 1466525 system_pods.go:74] duration metric: took 180.877731ms to wait for pod list to return data ...
	I1225 12:50:13.941627 1466525 default_sa.go:34] waiting for default service account to be created ...
	I1225 12:50:14.133615 1466525 request.go:629] Waited for 191.903237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/default/serviceaccounts
	I1225 12:50:14.133713 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/default/serviceaccounts
	I1225 12:50:14.133720 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:14.133727 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:14.133734 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:14.136586 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:14.136606 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:14.136613 1466525 round_trippers.go:580]     Audit-Id: 61b3067a-1a29-409b-b664-e95aefaed012
	I1225 12:50:14.136619 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:14.136624 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:14.136630 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:14.136635 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:14.136641 1466525 round_trippers.go:580]     Content-Length: 261
	I1225 12:50:14.136646 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:14 GMT
	I1225 12:50:14.136673 1466525 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"884"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c31b3c66-4ba0-4c6f-b7ee-b896b98df101","resourceVersion":"337","creationTimestamp":"2023-12-25T12:39:43Z"}}]}
	I1225 12:50:14.136858 1466525 default_sa.go:45] found service account: "default"
	I1225 12:50:14.136875 1466525 default_sa.go:55] duration metric: took 195.240942ms for default service account to be created ...
	I1225 12:50:14.136883 1466525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 12:50:14.333409 1466525 request.go:629] Waited for 196.440705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:14.333481 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:50:14.333485 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:14.333493 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:14.333499 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:14.337162 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:50:14.337189 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:14.337202 1466525 round_trippers.go:580]     Audit-Id: 4524d2dc-e7f9-4c61-8f9e-7054cfe88f2d
	I1225 12:50:14.337209 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:14.337217 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:14.337223 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:14.337230 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:14.337238 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:14 GMT
	I1225 12:50:14.338512 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"884"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1225 12:50:14.341012 1466525 system_pods.go:86] 12 kube-system pods found
	I1225 12:50:14.341040 1466525 system_pods.go:89] "coredns-5dd5756b68-mg2zk" [4f4e21f4-8e73-4b81-a080-c42b6980ee3b] Running
	I1225 12:50:14.341047 1466525 system_pods.go:89] "etcd-multinode-544936" [8dc9103e-ec1a-40f4-80f8-4f4918bb5e33] Running
	I1225 12:50:14.341053 1466525 system_pods.go:89] "kindnet-2hjhm" [8cfe7daa-3fc7-485a-8794-117466297c5a] Running
	I1225 12:50:14.341064 1466525 system_pods.go:89] "kindnet-7cr8v" [2136f166-f4d1-4529-a932-010126e9fc7d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:14.341074 1466525 system_pods.go:89] "kindnet-mjlfm" [a8f29535-29de-4e87-a068-63a97cc46b60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 12:50:14.341083 1466525 system_pods.go:89] "kube-apiserver-multinode-544936" [d0fda9c8-27cf-4ecc-b379-39745cb7ec19] Running
	I1225 12:50:14.341093 1466525 system_pods.go:89] "kube-controller-manager-multinode-544936" [e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0] Running
	I1225 12:50:14.341104 1466525 system_pods.go:89] "kube-proxy-7z5x6" [304c848e-4ecf-433d-a17d-b1b33784ae08] Running
	I1225 12:50:14.341110 1466525 system_pods.go:89] "kube-proxy-gkxgw" [d14fbb1d-1200-463f-bd2b-17943371448c] Running
	I1225 12:50:14.341117 1466525 system_pods.go:89] "kube-proxy-k4jc7" [14699a0d-601b-4bc3-9584-7ac67822a926] Running
	I1225 12:50:14.341125 1466525 system_pods.go:89] "kube-scheduler-multinode-544936" [e8027489-26d3-44c3-aeea-286e6689e75e] Running
	I1225 12:50:14.341132 1466525 system_pods.go:89] "storage-provisioner" [897346ba-f39d-4771-913e-535bff9ca6b7] Running
	I1225 12:50:14.341149 1466525 system_pods.go:126] duration metric: took 204.252621ms to wait for k8s-apps to be running ...
	I1225 12:50:14.341161 1466525 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:50:14.341219 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:50:14.356347 1466525 system_svc.go:56] duration metric: took 15.174908ms WaitForService to wait for kubelet.
	I1225 12:50:14.356378 1466525 kubeadm.go:581] duration metric: took 10.799103664s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:50:14.356401 1466525 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:50:14.533837 1466525 request.go:629] Waited for 177.344472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes
	I1225 12:50:14.533914 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:50:14.533919 1466525 round_trippers.go:469] Request Headers:
	I1225 12:50:14.533927 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:50:14.533933 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:50:14.536910 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:50:14.536943 1466525 round_trippers.go:577] Response Headers:
	I1225 12:50:14.536963 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:50:14.536978 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:50:14 GMT
	I1225 12:50:14.536987 1466525 round_trippers.go:580]     Audit-Id: 8db08fe7-cb40-4be8-b741-34fde4ecd18a
	I1225 12:50:14.536996 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:50:14.537005 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:50:14.537015 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:50:14.537282 1466525 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"884"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"856","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I1225 12:50:14.537868 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:14.537887 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:14.537898 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:14.537902 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:14.537905 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:50:14.537909 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:50:14.537913 1466525 node_conditions.go:105] duration metric: took 181.507571ms to run NodePressure ...
	I1225 12:50:14.537927 1466525 start.go:228] waiting for startup goroutines ...
	I1225 12:50:14.537933 1466525 start.go:233] waiting for cluster config update ...
	I1225 12:50:14.537940 1466525 start.go:242] writing updated cluster config ...
	I1225 12:50:14.538399 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:50:14.538520 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
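(Several requests in the wait phase above report "Waited ... due to client-side throttling, not priority and fairness". That delay is imposed by client-go's own rate limiter, governed by rest.Config's QPS and Burst fields, before server-side API Priority and Fairness ever sees the request. A sketch of where those knobs live follows; the values 20/40 are arbitrary illustration values, the kubeconfig path is an assumption, and client-go's documented defaults when the fields are left at zero are QPS=5, Burst=10. This is not minikube's configuration.)

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Left at zero these default to 5 and 10; raising them shortens the
        // "client-side throttling" waits seen in the log at the cost of more
        // concurrent load on the apiserver.
        cfg.QPS = 20
        cfg.Burst = 40
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println("client ready:", cs != nil)
    }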
	I1225 12:50:14.540746 1466525 out.go:177] * Starting worker node multinode-544936-m02 in cluster multinode-544936
	I1225 12:50:14.541989 1466525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:50:14.542009 1466525 cache.go:56] Caching tarball of preloaded images
	I1225 12:50:14.542132 1466525 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:50:14.542146 1466525 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:50:14.542246 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:50:14.542411 1466525 start.go:365] acquiring machines lock for multinode-544936-m02: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:50:14.542474 1466525 start.go:369] acquired machines lock for "multinode-544936-m02" in 41.95µs
	I1225 12:50:14.542491 1466525 start.go:96] Skipping create...Using existing machine configuration
	I1225 12:50:14.542498 1466525 fix.go:54] fixHost starting: m02
	I1225 12:50:14.542776 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:50:14.542802 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:50:14.557975 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1225 12:50:14.558447 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:50:14.558905 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:50:14.558927 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:50:14.559358 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:50:14.559521 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:50:14.559700 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetState
	I1225 12:50:14.561460 1466525 fix.go:102] recreateIfNeeded on multinode-544936-m02: state=Running err=<nil>
	W1225 12:50:14.561476 1466525 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 12:50:14.563251 1466525 out.go:177] * Updating the running kvm2 "multinode-544936-m02" VM ...
	I1225 12:50:14.565279 1466525 machine.go:88] provisioning docker machine ...
	I1225 12:50:14.565312 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:50:14.565544 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:50:14.565728 1466525 buildroot.go:166] provisioning hostname "multinode-544936-m02"
	I1225 12:50:14.565748 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:50:14.565900 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:50:14.568717 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.569186 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:14.569215 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.569385 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:50:14.569559 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:14.569699 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:14.569868 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:50:14.570061 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:50:14.570452 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:50:14.570473 1466525 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936-m02 && echo "multinode-544936-m02" | sudo tee /etc/hostname
	I1225 12:50:14.713742 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-544936-m02
	
	I1225 12:50:14.713784 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:50:14.716758 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.717118 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:14.717148 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.717341 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:50:14.717604 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:14.717802 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:14.718052 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:50:14.718254 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:50:14.718683 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:50:14.718706 1466525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-544936-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-544936-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-544936-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:50:14.847694 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:50:14.847750 1466525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:50:14.847766 1466525 buildroot.go:174] setting up certificates
	I1225 12:50:14.847778 1466525 provision.go:83] configureAuth start
	I1225 12:50:14.847790 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetMachineName
	I1225 12:50:14.848158 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:50:14.851019 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.851466 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:14.851498 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.851655 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:50:14.854227 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.854664 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:14.854689 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:14.854814 1466525 provision.go:138] copyHostCerts
	I1225 12:50:14.854847 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:50:14.854880 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:50:14.854889 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:50:14.854953 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:50:14.855028 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:50:14.855045 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:50:14.855051 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:50:14.855073 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:50:14.855115 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:50:14.855138 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:50:14.855144 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:50:14.855163 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:50:14.855206 1466525 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.multinode-544936-m02 san=[192.168.39.205 192.168.39.205 localhost 127.0.0.1 minikube multinode-544936-m02]
	I1225 12:50:15.000309 1466525 provision.go:172] copyRemoteCerts
	I1225 12:50:15.000371 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:50:15.000397 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:50:15.003323 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:15.003685 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:15.003714 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:15.003977 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:50:15.004209 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:15.004401 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:50:15.004545 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:50:15.096399 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:50:15.096505 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:50:15.118897 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:50:15.118968 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1225 12:50:15.143176 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:50:15.143279 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 12:50:15.165989 1466525 provision.go:86] duration metric: configureAuth took 318.194395ms
	I1225 12:50:15.166031 1466525 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:50:15.166291 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:50:15.166417 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:50:15.169487 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:15.169897 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:50:15.169925 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:50:15.170183 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:50:15.170395 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:15.170632 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:50:15.170843 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:50:15.171056 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:50:15.171533 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:50:15.171555 1466525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:51:45.750158 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:51:45.750197 1466525 machine.go:91] provisioned docker machine in 1m31.184892798s
	I1225 12:51:45.750209 1466525 start.go:300] post-start starting for "multinode-544936-m02" (driver="kvm2")
	I1225 12:51:45.750221 1466525 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:51:45.750240 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:51:45.750603 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:51:45.750655 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:51:45.753785 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:45.754276 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:45.754309 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:45.754486 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:51:45.754702 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:51:45.754913 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:51:45.755077 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:51:45.853193 1466525 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:51:45.857534 1466525 command_runner.go:130] > NAME=Buildroot
	I1225 12:51:45.857559 1466525 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1225 12:51:45.857566 1466525 command_runner.go:130] > ID=buildroot
	I1225 12:51:45.857578 1466525 command_runner.go:130] > VERSION_ID=2021.02.12
	I1225 12:51:45.857587 1466525 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1225 12:51:45.857627 1466525 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:51:45.857645 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:51:45.857738 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:51:45.857817 1466525 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:51:45.857827 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:51:45.857908 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:51:45.866638 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:51:45.890894 1466525 start.go:303] post-start completed in 140.665308ms
	I1225 12:51:45.890931 1466525 fix.go:56] fixHost completed within 1m31.348429894s
	I1225 12:51:45.890965 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:51:45.894233 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:45.894654 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:45.894696 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:45.894856 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:51:45.895109 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:51:45.895258 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:51:45.895421 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:51:45.895648 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:51:45.895981 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1225 12:51:45.895994 1466525 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:51:46.023665 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703508706.015246550
	
	I1225 12:51:46.023692 1466525 fix.go:206] guest clock: 1703508706.015246550
	I1225 12:51:46.023700 1466525 fix.go:219] Guest: 2023-12-25 12:51:46.01524655 +0000 UTC Remote: 2023-12-25 12:51:45.890938029 +0000 UTC m=+451.868116563 (delta=124.308521ms)
	I1225 12:51:46.023717 1466525 fix.go:190] guest clock delta is within tolerance: 124.308521ms
	I1225 12:51:46.023722 1466525 start.go:83] releasing machines lock for "multinode-544936-m02", held for 1m31.481236974s
	I1225 12:51:46.023752 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:51:46.024021 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:51:46.027121 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.027472 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:46.027512 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.029579 1466525 out.go:177] * Found network options:
	I1225 12:51:46.031150 1466525 out.go:177]   - NO_PROXY=192.168.39.21
	W1225 12:51:46.032659 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:51:46.032698 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:51:46.033496 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:51:46.033727 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:51:46.033837 1466525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:51:46.033893 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	W1225 12:51:46.033964 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:51:46.034052 1466525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:51:46.034071 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:51:46.036991 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.037079 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.037436 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:46.037460 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.037488 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:46.037508 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:46.037577 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:51:46.037770 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:51:46.037778 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:51:46.037944 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:51:46.037950 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:51:46.038125 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:51:46.038128 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:51:46.038274 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:51:46.279144 1466525 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1225 12:51:46.279281 1466525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1225 12:51:46.285107 1466525 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1225 12:51:46.285263 1466525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:51:46.285345 1466525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:51:46.293889 1466525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 12:51:46.293919 1466525 start.go:475] detecting cgroup driver to use...
	I1225 12:51:46.293998 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:51:46.309007 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:51:46.320826 1466525 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:51:46.320905 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:51:46.333480 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:51:46.346314 1466525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:51:46.481728 1466525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:51:46.610536 1466525 docker.go:219] disabling docker service ...
	I1225 12:51:46.610607 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:51:46.625644 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:51:46.639572 1466525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:51:46.783961 1466525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:51:46.916907 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
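	At this point the node's other runtimes have been taken out of the way so CRI-O is the only CRI endpoint: containerd is stopped, then cri-docker.socket/cri-docker.service and docker.socket/docker.service are each stopped, the sockets disabled and the services masked, and the final is-active probe confirms Docker is no longer running. Condensed from the commands the log runs above:

	    sudo systemctl stop -f containerd
	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service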
	I1225 12:51:46.929602 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:51:46.949561 1466525 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1225 12:51:46.949603 1466525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:51:46.949662 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:51:46.960151 1466525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:51:46.960236 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:51:46.969881 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:51:46.979463 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:51:46.989198 1466525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:51:46.999551 1466525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:51:47.008650 1466525 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1225 12:51:47.008824 1466525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:51:47.017131 1466525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:51:47.151076 1466525 ssh_runner.go:195] Run: sudo systemctl restart crio
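	The sed edits just above point CRI-O at registry.k8s.io/pause:3.9 for the infra ("pause") container and switch it to the cgroupfs cgroup manager with conmon placed in the "pod" cgroup, after which crictl is pointed at the CRI-O socket and the daemon is restarted. Reconstructed from those edits (section placement is an assumption; the same values show up later in the crio config dump), the /etc/crio/crio.conf.d/02-crio.conf drop-in ends up roughly as:

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"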
	I1225 12:51:47.373751 1466525 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:51:47.373842 1466525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:51:47.379295 1466525 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1225 12:51:47.379323 1466525 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1225 12:51:47.379330 1466525 command_runner.go:130] > Device: 16h/22d	Inode: 1207        Links: 1
	I1225 12:51:47.379339 1466525 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:51:47.379344 1466525 command_runner.go:130] > Access: 2023-12-25 12:51:47.298438689 +0000
	I1225 12:51:47.379351 1466525 command_runner.go:130] > Modify: 2023-12-25 12:51:47.298438689 +0000
	I1225 12:51:47.379356 1466525 command_runner.go:130] > Change: 2023-12-25 12:51:47.298438689 +0000
	I1225 12:51:47.379360 1466525 command_runner.go:130] >  Birth: -
	I1225 12:51:47.379379 1466525 start.go:543] Will wait 60s for crictl version
	I1225 12:51:47.379434 1466525 ssh_runner.go:195] Run: which crictl
	I1225 12:51:47.383313 1466525 command_runner.go:130] > /usr/bin/crictl
	I1225 12:51:47.383670 1466525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:51:47.432969 1466525 command_runner.go:130] > Version:  0.1.0
	I1225 12:51:47.433005 1466525 command_runner.go:130] > RuntimeName:  cri-o
	I1225 12:51:47.433014 1466525 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1225 12:51:47.433025 1466525 command_runner.go:130] > RuntimeApiVersion:  v1
	I1225 12:51:47.433051 1466525 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
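	Because /etc/crictl.yaml (written a few lines earlier) points the CRI client at unix:///var/run/crio/crio.sock, the crictl version call above returns CRI-O's runtime identity. The same query can be made without relying on crictl.yaml by passing the endpoint explicitly:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version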
	I1225 12:51:47.433133 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:51:47.476069 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:51:47.476100 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:51:47.476111 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:51:47.476118 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:51:47.476127 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:51:47.476133 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:51:47.476138 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:51:47.476144 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:51:47.476152 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:51:47.476164 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:51:47.476174 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:51:47.476182 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:51:47.476340 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:51:47.532198 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:51:47.532227 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:51:47.532235 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:51:47.532239 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:51:47.532248 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:51:47.532256 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:51:47.532265 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:51:47.532272 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:51:47.532281 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:51:47.532306 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:51:47.532317 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:51:47.532325 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:51:47.534365 1466525 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:51:47.535679 1466525 out.go:177]   - env NO_PROXY=192.168.39.21
	I1225 12:51:47.536956 1466525 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:51:47.539860 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:47.540280 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:51:47.540314 1466525 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:51:47.540549 1466525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:51:47.545120 1466525 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1225 12:51:47.545349 1466525 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936 for IP: 192.168.39.205
	I1225 12:51:47.545373 1466525 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:51:47.545532 1466525 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:51:47.545582 1466525 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:51:47.545599 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:51:47.545620 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:51:47.545634 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:51:47.545647 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:51:47.545720 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:51:47.545767 1466525 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:51:47.545786 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:51:47.545818 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:51:47.545852 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:51:47.545887 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:51:47.545939 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:51:47.545977 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:51:47.545996 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:51:47.546012 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:51:47.546573 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:51:47.573996 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:51:47.600280 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:51:47.629312 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:51:47.652168 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:51:47.678015 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:51:47.704860 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:51:47.729083 1466525 ssh_runner.go:195] Run: openssl version
	I1225 12:51:47.735124 1466525 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1225 12:51:47.735300 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:51:47.745459 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:51:47.750467 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:51:47.750605 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:51:47.750666 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:51:47.755963 1466525 command_runner.go:130] > 3ec20f2e
	I1225 12:51:47.756228 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 12:51:47.765399 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:51:47.775653 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:51:47.780389 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:51:47.780422 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:51:47.780469 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:51:47.786080 1466525 command_runner.go:130] > b5213941
	I1225 12:51:47.786319 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:51:47.795436 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:51:47.806047 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:51:47.811785 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:51:47.811822 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:51:47.811881 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:51:47.817577 1466525 command_runner.go:130] > 51391683
	I1225 12:51:47.817912 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
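	The three blocks above install each certificate into the guest's OpenSSL trust store by hash: the PEM is linked by name under /etc/ssl/certs, hashed with openssl x509 -hash -noout, and then linked again as /etc/ssl/certs/<hash>.0 so OpenSSL can find the CA by its subject hash. For one certificate, using the exact paths and hash from this log:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")        # b5213941 in the log above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"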
	I1225 12:51:47.827689 1466525 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:51:47.831967 1466525 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:51:47.832138 1466525 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:51:47.832254 1466525 ssh_runner.go:195] Run: crio config
	I1225 12:51:47.882087 1466525 command_runner.go:130] ! time="2023-12-25 12:51:47.874053602Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1225 12:51:47.882119 1466525 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1225 12:51:47.888983 1466525 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1225 12:51:47.889007 1466525 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1225 12:51:47.889013 1466525 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1225 12:51:47.889018 1466525 command_runner.go:130] > #
	I1225 12:51:47.889029 1466525 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1225 12:51:47.889039 1466525 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1225 12:51:47.889050 1466525 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1225 12:51:47.889064 1466525 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1225 12:51:47.889076 1466525 command_runner.go:130] > # reload'.
	I1225 12:51:47.889089 1466525 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1225 12:51:47.889101 1466525 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1225 12:51:47.889115 1466525 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1225 12:51:47.889127 1466525 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1225 12:51:47.889136 1466525 command_runner.go:130] > [crio]
	I1225 12:51:47.889149 1466525 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1225 12:51:47.889160 1466525 command_runner.go:130] > # containers images, in this directory.
	I1225 12:51:47.889170 1466525 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1225 12:51:47.889187 1466525 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1225 12:51:47.889200 1466525 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1225 12:51:47.889213 1466525 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1225 12:51:47.889226 1466525 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1225 12:51:47.889238 1466525 command_runner.go:130] > storage_driver = "overlay"
	I1225 12:51:47.889250 1466525 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1225 12:51:47.889262 1466525 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1225 12:51:47.889272 1466525 command_runner.go:130] > storage_option = [
	I1225 12:51:47.889283 1466525 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1225 12:51:47.889291 1466525 command_runner.go:130] > ]
	I1225 12:51:47.889304 1466525 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1225 12:51:47.889318 1466525 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1225 12:51:47.889328 1466525 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1225 12:51:47.889335 1466525 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1225 12:51:47.889344 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1225 12:51:47.889349 1466525 command_runner.go:130] > # always happen on a node reboot
	I1225 12:51:47.889355 1466525 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1225 12:51:47.889365 1466525 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1225 12:51:47.889374 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1225 12:51:47.889385 1466525 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1225 12:51:47.889393 1466525 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1225 12:51:47.889403 1466525 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1225 12:51:47.889413 1466525 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1225 12:51:47.889420 1466525 command_runner.go:130] > # internal_wipe = true
	I1225 12:51:47.889426 1466525 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1225 12:51:47.889434 1466525 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1225 12:51:47.889442 1466525 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1225 12:51:47.889450 1466525 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1225 12:51:47.889458 1466525 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1225 12:51:47.889464 1466525 command_runner.go:130] > [crio.api]
	I1225 12:51:47.889469 1466525 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1225 12:51:47.889476 1466525 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1225 12:51:47.889484 1466525 command_runner.go:130] > # IP address on which the stream server will listen.
	I1225 12:51:47.889491 1466525 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1225 12:51:47.889498 1466525 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1225 12:51:47.889505 1466525 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1225 12:51:47.889509 1466525 command_runner.go:130] > # stream_port = "0"
	I1225 12:51:47.889517 1466525 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1225 12:51:47.889523 1466525 command_runner.go:130] > # stream_enable_tls = false
	I1225 12:51:47.889529 1466525 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1225 12:51:47.889549 1466525 command_runner.go:130] > # stream_idle_timeout = ""
	I1225 12:51:47.889558 1466525 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1225 12:51:47.889567 1466525 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1225 12:51:47.889573 1466525 command_runner.go:130] > # minutes.
	I1225 12:51:47.889577 1466525 command_runner.go:130] > # stream_tls_cert = ""
	I1225 12:51:47.889585 1466525 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1225 12:51:47.889591 1466525 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1225 12:51:47.889597 1466525 command_runner.go:130] > # stream_tls_key = ""
	I1225 12:51:47.889603 1466525 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1225 12:51:47.889612 1466525 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1225 12:51:47.889622 1466525 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1225 12:51:47.889629 1466525 command_runner.go:130] > # stream_tls_ca = ""
	I1225 12:51:47.889636 1466525 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:51:47.889643 1466525 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1225 12:51:47.889650 1466525 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:51:47.889656 1466525 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1225 12:51:47.889675 1466525 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1225 12:51:47.889683 1466525 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1225 12:51:47.889688 1466525 command_runner.go:130] > [crio.runtime]
	I1225 12:51:47.889694 1466525 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1225 12:51:47.889702 1466525 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1225 12:51:47.889708 1466525 command_runner.go:130] > # "nofile=1024:2048"
	I1225 12:51:47.889715 1466525 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1225 12:51:47.889721 1466525 command_runner.go:130] > # default_ulimits = [
	I1225 12:51:47.889725 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.889733 1466525 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1225 12:51:47.889746 1466525 command_runner.go:130] > # no_pivot = false
	I1225 12:51:47.889755 1466525 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1225 12:51:47.889762 1466525 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1225 12:51:47.889769 1466525 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1225 12:51:47.889775 1466525 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1225 12:51:47.889781 1466525 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1225 12:51:47.889788 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:51:47.889795 1466525 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1225 12:51:47.889799 1466525 command_runner.go:130] > # Cgroup setting for conmon
	I1225 12:51:47.889808 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1225 12:51:47.889815 1466525 command_runner.go:130] > conmon_cgroup = "pod"
	I1225 12:51:47.889824 1466525 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1225 12:51:47.889830 1466525 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1225 12:51:47.889839 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:51:47.889845 1466525 command_runner.go:130] > conmon_env = [
	I1225 12:51:47.889851 1466525 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1225 12:51:47.889856 1466525 command_runner.go:130] > ]
	I1225 12:51:47.889862 1466525 command_runner.go:130] > # Additional environment variables to set for all the
	I1225 12:51:47.889869 1466525 command_runner.go:130] > # containers. These are overridden if set in the
	I1225 12:51:47.889875 1466525 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1225 12:51:47.889881 1466525 command_runner.go:130] > # default_env = [
	I1225 12:51:47.889885 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.889892 1466525 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1225 12:51:47.889899 1466525 command_runner.go:130] > # selinux = false
	I1225 12:51:47.889905 1466525 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1225 12:51:47.889913 1466525 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1225 12:51:47.889922 1466525 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1225 12:51:47.889928 1466525 command_runner.go:130] > # seccomp_profile = ""
	I1225 12:51:47.889934 1466525 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1225 12:51:47.889941 1466525 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1225 12:51:47.889947 1466525 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1225 12:51:47.889954 1466525 command_runner.go:130] > # which might increase security.
	I1225 12:51:47.889958 1466525 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1225 12:51:47.889967 1466525 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1225 12:51:47.889973 1466525 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1225 12:51:47.889981 1466525 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1225 12:51:47.889987 1466525 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1225 12:51:47.889994 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:51:47.889998 1466525 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1225 12:51:47.890004 1466525 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1225 12:51:47.890009 1466525 command_runner.go:130] > # the cgroup blockio controller.
	I1225 12:51:47.890017 1466525 command_runner.go:130] > # blockio_config_file = ""
	I1225 12:51:47.890023 1466525 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1225 12:51:47.890029 1466525 command_runner.go:130] > # irqbalance daemon.
	I1225 12:51:47.890034 1466525 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1225 12:51:47.890041 1466525 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1225 12:51:47.890049 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:51:47.890053 1466525 command_runner.go:130] > # rdt_config_file = ""
	I1225 12:51:47.890059 1466525 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1225 12:51:47.890063 1466525 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1225 12:51:47.890070 1466525 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1225 12:51:47.890075 1466525 command_runner.go:130] > # separate_pull_cgroup = ""
	I1225 12:51:47.890081 1466525 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1225 12:51:47.890089 1466525 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1225 12:51:47.890093 1466525 command_runner.go:130] > # will be added.
	I1225 12:51:47.890099 1466525 command_runner.go:130] > # default_capabilities = [
	I1225 12:51:47.890103 1466525 command_runner.go:130] > # 	"CHOWN",
	I1225 12:51:47.890107 1466525 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1225 12:51:47.890111 1466525 command_runner.go:130] > # 	"FSETID",
	I1225 12:51:47.890117 1466525 command_runner.go:130] > # 	"FOWNER",
	I1225 12:51:47.890120 1466525 command_runner.go:130] > # 	"SETGID",
	I1225 12:51:47.890124 1466525 command_runner.go:130] > # 	"SETUID",
	I1225 12:51:47.890129 1466525 command_runner.go:130] > # 	"SETPCAP",
	I1225 12:51:47.890133 1466525 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1225 12:51:47.890138 1466525 command_runner.go:130] > # 	"KILL",
	I1225 12:51:47.890141 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890150 1466525 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1225 12:51:47.890155 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:51:47.890162 1466525 command_runner.go:130] > # default_sysctls = [
	I1225 12:51:47.890165 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890172 1466525 command_runner.go:130] > # List of devices on the host that a
	I1225 12:51:47.890178 1466525 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1225 12:51:47.890184 1466525 command_runner.go:130] > # allowed_devices = [
	I1225 12:51:47.890188 1466525 command_runner.go:130] > # 	"/dev/fuse",
	I1225 12:51:47.890193 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890213 1466525 command_runner.go:130] > # List of additional devices. specified as
	I1225 12:51:47.890229 1466525 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1225 12:51:47.890241 1466525 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1225 12:51:47.890266 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:51:47.890276 1466525 command_runner.go:130] > # additional_devices = [
	I1225 12:51:47.890284 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890295 1466525 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1225 12:51:47.890305 1466525 command_runner.go:130] > # cdi_spec_dirs = [
	I1225 12:51:47.890311 1466525 command_runner.go:130] > # 	"/etc/cdi",
	I1225 12:51:47.890320 1466525 command_runner.go:130] > # 	"/var/run/cdi",
	I1225 12:51:47.890329 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890342 1466525 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1225 12:51:47.890353 1466525 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1225 12:51:47.890360 1466525 command_runner.go:130] > # Defaults to false.
	I1225 12:51:47.890365 1466525 command_runner.go:130] > # device_ownership_from_security_context = false
	I1225 12:51:47.890374 1466525 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1225 12:51:47.890382 1466525 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1225 12:51:47.890388 1466525 command_runner.go:130] > # hooks_dir = [
	I1225 12:51:47.890393 1466525 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1225 12:51:47.890398 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.890404 1466525 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1225 12:51:47.890413 1466525 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1225 12:51:47.890421 1466525 command_runner.go:130] > # its default mounts from the following two files:
	I1225 12:51:47.890425 1466525 command_runner.go:130] > #
	I1225 12:51:47.890431 1466525 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1225 12:51:47.890454 1466525 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1225 12:51:47.890466 1466525 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1225 12:51:47.890474 1466525 command_runner.go:130] > #
	I1225 12:51:47.890481 1466525 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1225 12:51:47.890490 1466525 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1225 12:51:47.890498 1466525 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1225 12:51:47.890506 1466525 command_runner.go:130] > #      only add mounts it finds in this file.
	I1225 12:51:47.890511 1466525 command_runner.go:130] > #
	I1225 12:51:47.890516 1466525 command_runner.go:130] > # default_mounts_file = ""
	I1225 12:51:47.890523 1466525 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1225 12:51:47.890532 1466525 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1225 12:51:47.890538 1466525 command_runner.go:130] > pids_limit = 1024
	I1225 12:51:47.890544 1466525 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1225 12:51:47.890552 1466525 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1225 12:51:47.890560 1466525 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1225 12:51:47.890570 1466525 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1225 12:51:47.890577 1466525 command_runner.go:130] > # log_size_max = -1
	I1225 12:51:47.890584 1466525 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1225 12:51:47.890592 1466525 command_runner.go:130] > # log_to_journald = false
	I1225 12:51:47.890600 1466525 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1225 12:51:47.890607 1466525 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1225 12:51:47.890612 1466525 command_runner.go:130] > # Path to directory for container attach sockets.
	I1225 12:51:47.890619 1466525 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1225 12:51:47.890625 1466525 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1225 12:51:47.890632 1466525 command_runner.go:130] > # bind_mount_prefix = ""
	I1225 12:51:47.890637 1466525 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1225 12:51:47.890643 1466525 command_runner.go:130] > # read_only = false
	I1225 12:51:47.890650 1466525 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1225 12:51:47.890658 1466525 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1225 12:51:47.890664 1466525 command_runner.go:130] > # live configuration reload.
	I1225 12:51:47.890668 1466525 command_runner.go:130] > # log_level = "info"
	I1225 12:51:47.890676 1466525 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1225 12:51:47.890681 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:51:47.890687 1466525 command_runner.go:130] > # log_filter = ""
	I1225 12:51:47.890693 1466525 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1225 12:51:47.890701 1466525 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1225 12:51:47.890710 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:51:47.890716 1466525 command_runner.go:130] > # uid_mappings = ""
	I1225 12:51:47.890722 1466525 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1225 12:51:47.890730 1466525 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1225 12:51:47.890742 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:51:47.890749 1466525 command_runner.go:130] > # gid_mappings = ""
	I1225 12:51:47.890755 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1225 12:51:47.890763 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:51:47.890772 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:51:47.890778 1466525 command_runner.go:130] > # minimum_mappable_uid = -1
	I1225 12:51:47.890784 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1225 12:51:47.890793 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:51:47.890801 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:51:47.890807 1466525 command_runner.go:130] > # minimum_mappable_gid = -1
	I1225 12:51:47.890813 1466525 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1225 12:51:47.890821 1466525 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1225 12:51:47.890830 1466525 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1225 12:51:47.890836 1466525 command_runner.go:130] > # ctr_stop_timeout = 30
	I1225 12:51:47.890844 1466525 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1225 12:51:47.890853 1466525 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1225 12:51:47.890859 1466525 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1225 12:51:47.890864 1466525 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1225 12:51:47.890872 1466525 command_runner.go:130] > drop_infra_ctr = false
	I1225 12:51:47.890878 1466525 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1225 12:51:47.890886 1466525 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1225 12:51:47.890893 1466525 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1225 12:51:47.890900 1466525 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1225 12:51:47.890906 1466525 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1225 12:51:47.890913 1466525 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1225 12:51:47.890918 1466525 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1225 12:51:47.890926 1466525 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1225 12:51:47.890934 1466525 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1225 12:51:47.890941 1466525 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1225 12:51:47.890948 1466525 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1225 12:51:47.890956 1466525 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1225 12:51:47.890961 1466525 command_runner.go:130] > # default_runtime = "runc"
	I1225 12:51:47.890969 1466525 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1225 12:51:47.890978 1466525 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1225 12:51:47.890987 1466525 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1225 12:51:47.890995 1466525 command_runner.go:130] > # creation as a file is not desired either.
	I1225 12:51:47.891003 1466525 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1225 12:51:47.891011 1466525 command_runner.go:130] > # the hostname is being managed dynamically.
	I1225 12:51:47.891016 1466525 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1225 12:51:47.891021 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.891027 1466525 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1225 12:51:47.891036 1466525 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1225 12:51:47.891042 1466525 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1225 12:51:47.891050 1466525 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1225 12:51:47.891054 1466525 command_runner.go:130] > #
	I1225 12:51:47.891061 1466525 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1225 12:51:47.891066 1466525 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1225 12:51:47.891072 1466525 command_runner.go:130] > #  runtime_type = "oci"
	I1225 12:51:47.891077 1466525 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1225 12:51:47.891083 1466525 command_runner.go:130] > #  privileged_without_host_devices = false
	I1225 12:51:47.891089 1466525 command_runner.go:130] > #  allowed_annotations = []
	I1225 12:51:47.891094 1466525 command_runner.go:130] > # Where:
	I1225 12:51:47.891099 1466525 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1225 12:51:47.891105 1466525 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1225 12:51:47.891114 1466525 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1225 12:51:47.891120 1466525 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1225 12:51:47.891126 1466525 command_runner.go:130] > #   in $PATH.
	I1225 12:51:47.891132 1466525 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1225 12:51:47.891139 1466525 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1225 12:51:47.891145 1466525 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1225 12:51:47.891151 1466525 command_runner.go:130] > #   state.
	I1225 12:51:47.891157 1466525 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1225 12:51:47.891164 1466525 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1225 12:51:47.891171 1466525 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1225 12:51:47.891178 1466525 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1225 12:51:47.891184 1466525 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1225 12:51:47.891191 1466525 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1225 12:51:47.891197 1466525 command_runner.go:130] > #   The currently recognized values are:
	I1225 12:51:47.891207 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1225 12:51:47.891222 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1225 12:51:47.891236 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1225 12:51:47.891248 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1225 12:51:47.891263 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1225 12:51:47.891276 1466525 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1225 12:51:47.891292 1466525 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1225 12:51:47.891304 1466525 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1225 12:51:47.891309 1466525 command_runner.go:130] > #   should be moved to the container's cgroup
	I1225 12:51:47.891314 1466525 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1225 12:51:47.891320 1466525 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1225 12:51:47.891324 1466525 command_runner.go:130] > runtime_type = "oci"
	I1225 12:51:47.891331 1466525 command_runner.go:130] > runtime_root = "/run/runc"
	I1225 12:51:47.891335 1466525 command_runner.go:130] > runtime_config_path = ""
	I1225 12:51:47.891339 1466525 command_runner.go:130] > monitor_path = ""
	I1225 12:51:47.891344 1466525 command_runner.go:130] > monitor_cgroup = ""
	I1225 12:51:47.891350 1466525 command_runner.go:130] > monitor_exec_cgroup = ""
	I1225 12:51:47.891356 1466525 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1225 12:51:47.891363 1466525 command_runner.go:130] > # running containers
	I1225 12:51:47.891368 1466525 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1225 12:51:47.891377 1466525 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1225 12:51:47.891408 1466525 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1225 12:51:47.891417 1466525 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1225 12:51:47.891422 1466525 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1225 12:51:47.891429 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1225 12:51:47.891434 1466525 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1225 12:51:47.891441 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1225 12:51:47.891446 1466525 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1225 12:51:47.891452 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1225 12:51:47.891459 1466525 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1225 12:51:47.891466 1466525 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1225 12:51:47.891472 1466525 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1225 12:51:47.891482 1466525 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1225 12:51:47.891491 1466525 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1225 12:51:47.891499 1466525 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1225 12:51:47.891509 1466525 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1225 12:51:47.891518 1466525 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1225 12:51:47.891524 1466525 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1225 12:51:47.891533 1466525 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1225 12:51:47.891539 1466525 command_runner.go:130] > # Example:
	I1225 12:51:47.891544 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1225 12:51:47.891551 1466525 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1225 12:51:47.891556 1466525 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1225 12:51:47.891563 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1225 12:51:47.891566 1466525 command_runner.go:130] > # cpuset = 0
	I1225 12:51:47.891571 1466525 command_runner.go:130] > # cpushares = "0-1"
	I1225 12:51:47.891576 1466525 command_runner.go:130] > # Where:
	I1225 12:51:47.891581 1466525 command_runner.go:130] > # The workload name is workload-type.
	I1225 12:51:47.891590 1466525 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1225 12:51:47.891595 1466525 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1225 12:51:47.891601 1466525 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1225 12:51:47.891609 1466525 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1225 12:51:47.891616 1466525 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1225 12:51:47.891620 1466525 command_runner.go:130] > # 
	I1225 12:51:47.891629 1466525 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1225 12:51:47.891634 1466525 command_runner.go:130] > #
	I1225 12:51:47.891640 1466525 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1225 12:51:47.891648 1466525 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1225 12:51:47.891654 1466525 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1225 12:51:47.891662 1466525 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1225 12:51:47.891668 1466525 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1225 12:51:47.891674 1466525 command_runner.go:130] > [crio.image]
	I1225 12:51:47.891679 1466525 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1225 12:51:47.891683 1466525 command_runner.go:130] > # default_transport = "docker://"
	I1225 12:51:47.891689 1466525 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1225 12:51:47.891698 1466525 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:51:47.891702 1466525 command_runner.go:130] > # global_auth_file = ""
	I1225 12:51:47.891707 1466525 command_runner.go:130] > # The image used to instantiate infra containers.
	I1225 12:51:47.891712 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:51:47.891717 1466525 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1225 12:51:47.891726 1466525 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1225 12:51:47.891733 1466525 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:51:47.891743 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:51:47.891750 1466525 command_runner.go:130] > # pause_image_auth_file = ""
	I1225 12:51:47.891757 1466525 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1225 12:51:47.891765 1466525 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1225 12:51:47.891771 1466525 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1225 12:51:47.891779 1466525 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1225 12:51:47.891786 1466525 command_runner.go:130] > # pause_command = "/pause"
	I1225 12:51:47.891792 1466525 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1225 12:51:47.891801 1466525 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1225 12:51:47.891809 1466525 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1225 12:51:47.891817 1466525 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1225 12:51:47.891825 1466525 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1225 12:51:47.891831 1466525 command_runner.go:130] > # signature_policy = ""
	I1225 12:51:47.891837 1466525 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1225 12:51:47.891846 1466525 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1225 12:51:47.891852 1466525 command_runner.go:130] > # changing them here.
	I1225 12:51:47.891856 1466525 command_runner.go:130] > # insecure_registries = [
	I1225 12:51:47.891865 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.891875 1466525 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1225 12:51:47.891882 1466525 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1225 12:51:47.891889 1466525 command_runner.go:130] > # image_volumes = "mkdir"
	I1225 12:51:47.891894 1466525 command_runner.go:130] > # Temporary directory to use for storing big files
	I1225 12:51:47.891901 1466525 command_runner.go:130] > # big_files_temporary_dir = ""
	I1225 12:51:47.891907 1466525 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1225 12:51:47.891913 1466525 command_runner.go:130] > # CNI plugins.
	I1225 12:51:47.891917 1466525 command_runner.go:130] > [crio.network]
	I1225 12:51:47.891925 1466525 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1225 12:51:47.891933 1466525 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1225 12:51:47.891937 1466525 command_runner.go:130] > # cni_default_network = ""
	I1225 12:51:47.891945 1466525 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1225 12:51:47.891950 1466525 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1225 12:51:47.891958 1466525 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1225 12:51:47.891965 1466525 command_runner.go:130] > # plugin_dirs = [
	I1225 12:51:47.891969 1466525 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1225 12:51:47.891975 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.891981 1466525 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1225 12:51:47.891987 1466525 command_runner.go:130] > [crio.metrics]
	I1225 12:51:47.891991 1466525 command_runner.go:130] > # Globally enable or disable metrics support.
	I1225 12:51:47.891998 1466525 command_runner.go:130] > enable_metrics = true
	I1225 12:51:47.892003 1466525 command_runner.go:130] > # Specify enabled metrics collectors.
	I1225 12:51:47.892010 1466525 command_runner.go:130] > # Per default all metrics are enabled.
	I1225 12:51:47.892016 1466525 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1225 12:51:47.892024 1466525 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1225 12:51:47.892032 1466525 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1225 12:51:47.892039 1466525 command_runner.go:130] > # metrics_collectors = [
	I1225 12:51:47.892042 1466525 command_runner.go:130] > # 	"operations",
	I1225 12:51:47.892049 1466525 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1225 12:51:47.892054 1466525 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1225 12:51:47.892060 1466525 command_runner.go:130] > # 	"operations_errors",
	I1225 12:51:47.892065 1466525 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1225 12:51:47.892071 1466525 command_runner.go:130] > # 	"image_pulls_by_name",
	I1225 12:51:47.892076 1466525 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1225 12:51:47.892083 1466525 command_runner.go:130] > # 	"image_pulls_failures",
	I1225 12:51:47.892087 1466525 command_runner.go:130] > # 	"image_pulls_successes",
	I1225 12:51:47.892096 1466525 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1225 12:51:47.892103 1466525 command_runner.go:130] > # 	"image_layer_reuse",
	I1225 12:51:47.892107 1466525 command_runner.go:130] > # 	"containers_oom_total",
	I1225 12:51:47.892113 1466525 command_runner.go:130] > # 	"containers_oom",
	I1225 12:51:47.892118 1466525 command_runner.go:130] > # 	"processes_defunct",
	I1225 12:51:47.892124 1466525 command_runner.go:130] > # 	"operations_total",
	I1225 12:51:47.892128 1466525 command_runner.go:130] > # 	"operations_latency_seconds",
	I1225 12:51:47.892133 1466525 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1225 12:51:47.892139 1466525 command_runner.go:130] > # 	"operations_errors_total",
	I1225 12:51:47.892143 1466525 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1225 12:51:47.892150 1466525 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1225 12:51:47.892154 1466525 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1225 12:51:47.892161 1466525 command_runner.go:130] > # 	"image_pulls_success_total",
	I1225 12:51:47.892165 1466525 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1225 12:51:47.892172 1466525 command_runner.go:130] > # 	"containers_oom_count_total",
	I1225 12:51:47.892175 1466525 command_runner.go:130] > # ]
	I1225 12:51:47.892183 1466525 command_runner.go:130] > # The port on which the metrics server will listen.
	I1225 12:51:47.892187 1466525 command_runner.go:130] > # metrics_port = 9090
	I1225 12:51:47.892197 1466525 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1225 12:51:47.892205 1466525 command_runner.go:130] > # metrics_socket = ""
	I1225 12:51:47.892216 1466525 command_runner.go:130] > # The certificate for the secure metrics server.
	I1225 12:51:47.892229 1466525 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1225 12:51:47.892244 1466525 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1225 12:51:47.892255 1466525 command_runner.go:130] > # certificate on any modification event.
	I1225 12:51:47.892265 1466525 command_runner.go:130] > # metrics_cert = ""
	I1225 12:51:47.892276 1466525 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1225 12:51:47.892287 1466525 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1225 12:51:47.892297 1466525 command_runner.go:130] > # metrics_key = ""
	I1225 12:51:47.892308 1466525 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1225 12:51:47.892318 1466525 command_runner.go:130] > [crio.tracing]
	I1225 12:51:47.892329 1466525 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1225 12:51:47.892339 1466525 command_runner.go:130] > # enable_tracing = false
	I1225 12:51:47.892347 1466525 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1225 12:51:47.892353 1466525 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1225 12:51:47.892358 1466525 command_runner.go:130] > # Number of samples to collect per million spans.
	I1225 12:51:47.892366 1466525 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1225 12:51:47.892372 1466525 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1225 12:51:47.892378 1466525 command_runner.go:130] > [crio.stats]
	I1225 12:51:47.892384 1466525 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1225 12:51:47.892392 1466525 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1225 12:51:47.892399 1466525 command_runner.go:130] > # stats_collection_period = 0
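The dump above is the CRI-O configuration minikube rendered into the guest (pause_image registry.k8s.io/pause:3.9, metrics enabled, most other options left at their commented defaults). A minimal sketch for double-checking the effective config and the metrics endpoint from inside the node; it assumes the profile and node names from this run (multinode-544936, m02), that the crio binary is on the guest PATH, and that the commented-out metrics_port keeps its 9090 default:

    # open a shell on the worker that is being joined
    minikube ssh -p multinode-544936 -n m02
    # show the configuration CRI-O is actually running with
    sudo crio config | grep -E 'pause_image|enable_metrics|metrics_port'
    # enable_metrics = true above; the listener defaults to port 9090 when metrics_port is commented out
    curl -s http://127.0.0.1:9090/metrics | head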
	I1225 12:51:47.892526 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:51:47.892542 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:51:47.892553 1466525 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:51:47.892575 1466525 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-544936 NodeName:multinode-544936-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:51:47.892686 1466525 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-544936-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
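The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what the joining worker consumes. Once the node has joined, the cluster-wide pieces can be read back from the ConfigMaps kubeadm keeps in kube-system, which is also where the preflight step further down points; a sketch, assuming the usual kubeadm ConfigMap names (kubeadm-config, kubelet-config, kube-proxy):

    kubectl -n kube-system get cm kubeadm-config -o yaml
    kubectl -n kube-system get cm kubelet-config -o yaml
    kubectl -n kube-system get cm kube-proxy -o yaml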
	
	I1225 12:51:47.892755 1466525 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-544936-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
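The ExecStart line above is written into the 10-kubeadm.conf drop-in that is copied to the node a few lines below. To confirm kubelet actually picked up that drop-in, and to restart it after changes, the usual systemd checks apply; a sketch, assuming a shell inside the guest:

    systemctl cat kubelet                    # unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
    journalctl -u kubelet --no-pager -n 50   # recent kubelet log lines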
	I1225 12:51:47.892812 1466525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:51:47.902269 1466525 command_runner.go:130] > kubeadm
	I1225 12:51:47.902290 1466525 command_runner.go:130] > kubectl
	I1225 12:51:47.902294 1466525 command_runner.go:130] > kubelet
	I1225 12:51:47.902565 1466525 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:51:47.902639 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1225 12:51:47.911977 1466525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1225 12:51:47.929326 1466525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:51:47.946094 1466525 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I1225 12:51:47.950214 1466525 command_runner.go:130] > 192.168.39.21	control-plane.minikube.internal
	I1225 12:51:47.950512 1466525 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:51:47.950761 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:51:47.950928 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:51:47.950961 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:51:47.966111 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I1225 12:51:47.966687 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:51:47.967183 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:51:47.967215 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:51:47.967521 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:51:47.967733 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:51:47.967916 1466525 start.go:304] JoinCluster: &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:51:47.968066 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1225 12:51:47.968090 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:51:47.970734 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:51:47.971136 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:51:47.971160 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:51:47.971263 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:51:47.971430 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:51:47.971563 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:51:47.971729 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:51:48.161144 1466525 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token iv38w2.a9geqt5otu5mvep1 --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 12:51:48.163239 1466525 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:51:48.163287 1466525 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:51:48.163637 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:51:48.163674 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:51:48.178851 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I1225 12:51:48.179359 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:51:48.179909 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:51:48.179939 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:51:48.180271 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:51:48.180497 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:51:48.180722 1466525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-544936-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1225 12:51:48.180752 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:51:48.183606 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:51:48.184067 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:51:48.184095 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:51:48.184257 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:51:48.184464 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:51:48.184598 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:51:48.184763 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:51:48.394309 1466525 command_runner.go:130] > node/multinode-544936-m02 cordoned
	I1225 12:51:51.444194 1466525 command_runner.go:130] > pod "busybox-5bc68d56bd-z5f74" has DeletionTimestamp older than 1 seconds, skipping
	I1225 12:51:51.444238 1466525 command_runner.go:130] > node/multinode-544936-m02 drained
	I1225 12:51:51.446051 1466525 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1225 12:51:51.446079 1466525 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-mjlfm, kube-system/kube-proxy-7z5x6
	I1225 12:51:51.446116 1466525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-544936-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.265358124s)
	I1225 12:51:51.446135 1466525 node.go:108] successfully drained node "m02"
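The drain above took about 3.3s; note the warning that --delete-local-data is deprecated in favour of --delete-emptydir-data. The equivalent manual command, using the same flags as this run:

    kubectl drain multinode-544936-m02 \
      --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
      --disable-eviction --ignore-daemonsets --delete-emptydir-data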
	I1225 12:51:51.446568 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:51:51.446856 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:51:51.447277 1466525 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1225 12:51:51.447354 1466525 round_trippers.go:463] DELETE https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:51.447366 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:51.447379 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:51.447393 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:51.447407 1466525 round_trippers.go:473]     Content-Type: application/json
	I1225 12:51:51.460101 1466525 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1225 12:51:51.460131 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:51.460142 1466525 round_trippers.go:580]     Audit-Id: 3d99a3f7-970f-4645-9355-bdea6fe9e018
	I1225 12:51:51.460151 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:51.460159 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:51.460167 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:51.460176 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:51.460185 1466525 round_trippers.go:580]     Content-Length: 171
	I1225 12:51:51.460201 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:51 GMT
	I1225 12:51:51.460239 1466525 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-544936-m02","kind":"nodes","uid":"9d9aae71-8bf8-4c71-a121-4b808f94d6e0"}}
	I1225 12:51:51.460280 1466525 node.go:124] successfully deleted node "m02"
	I1225 12:51:51.460302 1466525 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
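The DELETE request above removes the stale Node object through the API before the re-join; the same step from the CLI:

    kubectl delete node multinode-544936-m02
    kubectl get nodes   # m02 should be absent until the join below completes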
	I1225 12:51:51.460331 1466525 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:51:51.460358 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv38w2.a9geqt5otu5mvep1 --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-544936-m02"
	I1225 12:51:51.524072 1466525 command_runner.go:130] ! W1225 12:51:51.515941    2645 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1225 12:51:51.524328 1466525 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1225 12:51:51.681114 1466525 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1225 12:51:51.681146 1466525 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1225 12:51:52.483122 1466525 command_runner.go:130] > [preflight] Running pre-flight checks
	I1225 12:51:52.483157 1466525 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1225 12:51:52.483172 1466525 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1225 12:51:52.483185 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:51:52.483197 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:51:52.483207 1466525 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1225 12:51:52.483217 1466525 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1225 12:51:52.483225 1466525 command_runner.go:130] > This node has joined the cluster:
	I1225 12:51:52.483236 1466525 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1225 12:51:52.483249 1466525 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1225 12:51:52.483263 1466525 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1225 12:51:52.483292 1466525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv38w2.a9geqt5otu5mvep1 --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-544936-m02": (1.022913512s)
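The re-join uses the token minted earlier with --print-join-command. A sketch of the same flow by hand; <token> and <hash> stand in for whatever the control plane prints, and the unix:// scheme avoids the CRI-socket deprecation warning seen in the preflight output above:

    # on the control plane: print a join command with a non-expiring token
    kubeadm token create --print-join-command --ttl=0

    # on the worker: run the printed command, adding the CRI socket and node name
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name multinode-544936-m02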
	I1225 12:51:52.483319 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1225 12:51:52.775948 1466525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=multinode-544936 minikube.k8s.io/updated_at=2023_12_25T12_51_52_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:51:52.874235 1466525 command_runner.go:130] > node/multinode-544936-m02 labeled
	I1225 12:51:52.885811 1466525 command_runner.go:130] > node/multinode-544936-m03 labeled
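The label step stamps the minikube metadata onto every non-primary node (both m02 and m03 here). Checking or reapplying it by hand:

    kubectl get nodes --show-labels
    kubectl label node multinode-544936-m02 minikube.k8s.io/primary=false --overwrite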
	I1225 12:51:52.890230 1466525 start.go:306] JoinCluster complete in 4.922309927s
	I1225 12:51:52.890267 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:51:52.890274 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:51:52.890353 1466525 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 12:51:52.896980 1466525 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1225 12:51:52.897021 1466525 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1225 12:51:52.897032 1466525 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1225 12:51:52.897042 1466525 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:51:52.897051 1466525 command_runner.go:130] > Access: 2023-12-25 12:49:25.300350221 +0000
	I1225 12:51:52.897060 1466525 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1225 12:51:52.897067 1466525 command_runner.go:130] > Change: 2023-12-25 12:49:23.350350221 +0000
	I1225 12:51:52.897073 1466525 command_runner.go:130] >  Birth: -
	I1225 12:51:52.897147 1466525 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1225 12:51:52.897157 1466525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1225 12:51:52.919500 1466525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 12:51:53.290473 1466525 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:51:53.294388 1466525 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:51:53.297108 1466525 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1225 12:51:53.310067 1466525 command_runner.go:130] > daemonset.apps/kindnet configured
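After the join, the kindnet CNI manifest is re-applied and only the DaemonSet changes. A quick rollout check across the three nodes; a sketch, assuming kindnet keeps its usual app=kindnet pod label:

    kubectl -n kube-system rollout status daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide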
	I1225 12:51:53.313459 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:51:53.313691 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:51:53.314010 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:51:53.314023 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.314031 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.314037 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.316394 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:53.316417 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.316426 1466525 round_trippers.go:580]     Audit-Id: cde2be4a-3bab-4af7-a01b-708e2e6a509b
	I1225 12:51:53.316434 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.316443 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.316453 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.316469 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.316478 1466525 round_trippers.go:580]     Content-Length: 291
	I1225 12:51:53.316490 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.316706 1466525 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"883","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1225 12:51:53.316806 1466525 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-544936" context rescaled to 1 replicas
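minikube keeps coredns at a single replica on multi-node profiles, which is what the Scale subresource read above confirms. The CLI equivalent:

    kubectl -n kube-system get deployment coredns
    kubectl -n kube-system scale deployment coredns --replicas=1   # no-op when already at 1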
	I1225 12:51:53.316836 1466525 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1225 12:51:53.318717 1466525 out.go:177] * Verifying Kubernetes components...
	I1225 12:51:53.320260 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:51:53.334369 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:51:53.334631 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:51:53.334869 1466525 node_ready.go:35] waiting up to 6m0s for node "multinode-544936-m02" to be "Ready" ...
	I1225 12:51:53.334943 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:53.334950 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.334958 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.334965 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.337259 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:53.337273 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.337282 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.337296 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.337305 1466525 round_trippers.go:580]     Audit-Id: b4edb137-d943-4250-9c98-9a0571d17d62
	I1225 12:51:53.337312 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.337322 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.337334 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.337462 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"b32a0af7-ee24-4bb7-b481-19b822376a8d","resourceVersion":"1027","creationTimestamp":"2023-12-25T12:51:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_51_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:51:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1225 12:51:53.337854 1466525 node_ready.go:49] node "multinode-544936-m02" has status "Ready":"True"
	I1225 12:51:53.337871 1466525 node_ready.go:38] duration metric: took 2.986702ms waiting for node "multinode-544936-m02" to be "Ready" ...
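The readiness poll above hits /api/v1/nodes/<name> directly; the same wait from the CLI, with the 6m budget this run allows:

    kubectl wait --for=condition=Ready node/multinode-544936-m02 --timeout=6m
    kubectl get node multinode-544936-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'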
	I1225 12:51:53.337883 1466525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:51:53.337967 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:51:53.337977 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.337988 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.337998 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.341894 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.341916 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.341926 1466525 round_trippers.go:580]     Audit-Id: acaecd30-c718-4912-921f-f62b563e26a5
	I1225 12:51:53.341934 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.341941 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.341949 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.341962 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.341971 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.343654 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1035"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82198 chars]
	I1225 12:51:53.346077 1466525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.346154 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:51:53.346163 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.346176 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.346182 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.349278 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.349297 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.349306 1466525 round_trippers.go:580]     Audit-Id: e1b46fae-a550-48f3-9b59-6a8aef4fade0
	I1225 12:51:53.349316 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.349324 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.349332 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.349339 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.349347 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.350073 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1225 12:51:53.350629 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:53.350645 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.350653 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.350659 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.354793 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:51:53.354809 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.354815 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.354821 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.354826 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.354831 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.354836 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.354841 1466525 round_trippers.go:580]     Audit-Id: d4aa009c-afe8-4037-b503-55266c96773b
	I1225 12:51:53.355558 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:53.355898 1466525 pod_ready.go:92] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:53.355919 1466525 pod_ready.go:81] duration metric: took 9.820654ms waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
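The per-pod checks that follow walk the system-critical components (coredns above, then etcd, kube-apiserver, and so on). The same gate expressed with kubectl wait; a sketch using the labels this run filters on:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' --timeout=2m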
	I1225 12:51:53.355928 1466525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.355996 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:51:53.356007 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.356014 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.356021 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.360063 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:51:53.360080 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.360089 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.360098 1466525 round_trippers.go:580]     Audit-Id: 8dda0503-bfbe-4522-a6ad-d874f4d94878
	I1225 12:51:53.360104 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.360112 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.360120 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.360129 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.360944 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"884","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1225 12:51:53.361343 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:53.361365 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.361375 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.361385 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.366187 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:51:53.366207 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.366216 1466525 round_trippers.go:580]     Audit-Id: 4a146f7e-cfce-46ef-81ac-e640d6bccd21
	I1225 12:51:53.366224 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.366232 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.366263 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.366276 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.366284 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.367286 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:53.367703 1466525 pod_ready.go:92] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:53.367728 1466525 pod_ready.go:81] duration metric: took 11.793345ms waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.367752 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.367833 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:51:53.367844 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.367855 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.367876 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.371574 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.371594 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.371604 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.371613 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.371621 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.371629 1466525 round_trippers.go:580]     Audit-Id: fcf10ecf-ba9d-4495-be9e-7d9b4f138e61
	I1225 12:51:53.371639 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.371647 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.371807 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"874","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1225 12:51:53.372336 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:53.372353 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.372364 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.372374 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.376655 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:51:53.376681 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.376690 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.376696 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.376710 1466525 round_trippers.go:580]     Audit-Id: 705f37a6-16df-4aac-afea-19eb0562a877
	I1225 12:51:53.376718 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.376725 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.376737 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.376926 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:53.377280 1466525 pod_ready.go:92] pod "kube-apiserver-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:53.377296 1466525 pod_ready.go:81] duration metric: took 9.533314ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.377305 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.377367 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:51:53.377376 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.377383 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.377389 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.380427 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.380448 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.380454 1466525 round_trippers.go:580]     Audit-Id: 0f5a85ca-b7a0-4302-9adb-0b2bb30f25f8
	I1225 12:51:53.380460 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.380465 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.380470 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.380479 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.380484 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.380971 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"858","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1225 12:51:53.381492 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:53.381509 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.381519 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.381528 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.385140 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.385158 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.385165 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.385172 1466525 round_trippers.go:580]     Audit-Id: 7dd97f7f-4752-4eda-9759-53a541411dd3
	I1225 12:51:53.385181 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.385190 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.385199 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.385213 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.385406 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:53.385785 1466525 pod_ready.go:92] pod "kube-controller-manager-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:53.385803 1466525 pod_ready.go:81] duration metric: took 8.489999ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.385816 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:53.535140 1466525 request.go:629] Waited for 149.231623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:51:53.535204 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:51:53.535209 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.535218 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.535224 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.538621 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.538647 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.538657 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.538665 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.538672 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.538680 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.538689 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.538697 1466525 round_trippers.go:580]     Audit-Id: 8b1de34d-190e-408a-a532-3952b33af2fe
	I1225 12:51:53.538977 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"1030","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1225 12:51:53.736005 1466525 request.go:629] Waited for 196.3601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:53.736072 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:53.736078 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.736101 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.736114 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.739274 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:53.739296 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.739303 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.739308 1466525 round_trippers.go:580]     Audit-Id: 82c49c7b-525c-4358-8e60-090f42b6c4e2
	I1225 12:51:53.739316 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.739321 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.739326 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.739333 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.739685 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"b32a0af7-ee24-4bb7-b481-19b822376a8d","resourceVersion":"1027","creationTimestamp":"2023-12-25T12:51:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_51_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:51:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1225 12:51:53.935590 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:51:53.935634 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:53.935647 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:53.935658 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:53.943420 1466525 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1225 12:51:53.943447 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:53.943456 1466525 round_trippers.go:580]     Audit-Id: 37d102dd-6be5-4bd8-a370-6e98bcf7a8d1
	I1225 12:51:53.943462 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:53.943468 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:53.943473 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:53.943478 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:53.943484 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:53 GMT
	I1225 12:51:53.943724 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"1030","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1225 12:51:54.135393 1466525 request.go:629] Waited for 191.020735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:54.135458 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:54.135462 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:54.135470 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:54.135477 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:54.137879 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:54.137901 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:54.137911 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:54 GMT
	I1225 12:51:54.137919 1466525 round_trippers.go:580]     Audit-Id: 2824a330-8bdf-4ab1-843b-8ff74ee744b9
	I1225 12:51:54.137926 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:54.137934 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:54.137941 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:54.137948 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:54.138047 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"b32a0af7-ee24-4bb7-b481-19b822376a8d","resourceVersion":"1027","creationTimestamp":"2023-12-25T12:51:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_51_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:51:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1225 12:51:54.386534 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:51:54.386560 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:54.386569 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:54.386588 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:54.389465 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:54.389486 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:54.389493 1466525 round_trippers.go:580]     Audit-Id: 082c0168-6811-4975-9382-793b4191c21b
	I1225 12:51:54.389499 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:54.389506 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:54.389514 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:54.389522 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:54.389530 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:54 GMT
	I1225 12:51:54.389795 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"1046","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1225 12:51:54.535625 1466525 request.go:629] Waited for 145.362228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:54.535702 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:51:54.535708 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:54.535716 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:54.535725 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:54.538702 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:54.538722 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:54.538730 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:54.538736 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:54 GMT
	I1225 12:51:54.538744 1466525 round_trippers.go:580]     Audit-Id: bb2d6c21-1f33-40bb-8c5b-a9dd07b776d4
	I1225 12:51:54.538752 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:54.538767 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:54.538777 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:54.539254 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"b32a0af7-ee24-4bb7-b481-19b822376a8d","resourceVersion":"1027","creationTimestamp":"2023-12-25T12:51:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_51_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:51:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1225 12:51:54.539604 1466525 pod_ready.go:92] pod "kube-proxy-7z5x6" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:54.539626 1466525 pod_ready.go:81] duration metric: took 1.153800885s waiting for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:54.539636 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:54.735024 1466525 request.go:629] Waited for 195.313691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:51:54.735108 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:51:54.735113 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:54.735122 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:54.735135 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:54.737818 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:54.737846 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:54.737853 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:54.737859 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:54.737866 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:54.737871 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:54 GMT
	I1225 12:51:54.737878 1466525 round_trippers.go:580]     Audit-Id: 2372a425-5fa8-426a-88f6-af0984e2ec0d
	I1225 12:51:54.737886 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:54.738020 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gkxgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d14fbb1d-1200-463f-bd2b-17943371448c","resourceVersion":"714","creationTimestamp":"2023-12-25T12:41:20Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1225 12:51:54.936004 1466525 request.go:629] Waited for 197.398005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:51:54.936135 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:51:54.936147 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:54.936159 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:54.936170 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:54.939044 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:54.939073 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:54.939083 1466525 round_trippers.go:580]     Audit-Id: 02698f4f-2067-4efa-b31f-76472d72c3b2
	I1225 12:51:54.939090 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:54.939097 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:54.939104 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:54.939111 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:54.939118 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:54 GMT
	I1225 12:51:54.939312 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"3744762d-9d11-4193-82ab-cd70245fefca","resourceVersion":"1028","creationTimestamp":"2023-12-25T12:42:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_51_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:42:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3965 chars]
	I1225 12:51:54.939628 1466525 pod_ready.go:92] pod "kube-proxy-gkxgw" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:54.939647 1466525 pod_ready.go:81] duration metric: took 400.001777ms waiting for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:54.939660 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:55.135645 1466525 request.go:629] Waited for 195.892095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:51:55.135724 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:51:55.135729 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:55.135737 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:55.135744 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:55.138544 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:55.138641 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:55.138665 1466525 round_trippers.go:580]     Audit-Id: fd672c4c-529d-4937-b274-d2fb7d6f8522
	I1225 12:51:55.138681 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:55.138690 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:55.138700 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:55.138714 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:55.138724 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:55 GMT
	I1225 12:51:55.138847 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"790","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:51:55.335771 1466525 request.go:629] Waited for 196.454179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:55.335833 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:55.335838 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:55.335846 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:55.335853 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:55.338499 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:55.338525 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:55.338535 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:55.338543 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:55.338550 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:55.338558 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:55.338571 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:55 GMT
	I1225 12:51:55.338580 1466525 round_trippers.go:580]     Audit-Id: 7b19157d-f6db-4811-ad03-29acbebae367
	I1225 12:51:55.338799 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:55.339163 1466525 pod_ready.go:92] pod "kube-proxy-k4jc7" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:55.339180 1466525 pod_ready.go:81] duration metric: took 399.513213ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:55.339190 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:55.535075 1466525 request.go:629] Waited for 195.811492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:51:55.535178 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:51:55.535190 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:55.535203 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:55.535222 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:55.537766 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:55.537793 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:55.537804 1466525 round_trippers.go:580]     Audit-Id: ff22093e-c770-468f-a029-a2c90be5bc3e
	I1225 12:51:55.537813 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:55.537820 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:55.537831 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:55.537846 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:55.537853 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:55 GMT
	I1225 12:51:55.538118 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"876","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1225 12:51:55.736056 1466525 request.go:629] Waited for 197.366811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:55.736136 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:51:55.736142 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:55.736153 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:55.736163 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:55.739148 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:51:55.739177 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:55.739188 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:55.739196 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:55.739204 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:55.739212 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:55.739228 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:55 GMT
	I1225 12:51:55.739235 1466525 round_trippers.go:580]     Audit-Id: 64641019-0afc-4e7f-b2eb-b09688d7c1ef
	I1225 12:51:55.740008 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:51:55.740379 1466525 pod_ready.go:92] pod "kube-scheduler-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:51:55.740399 1466525 pod_ready.go:81] duration metric: took 401.201994ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:51:55.740409 1466525 pod_ready.go:38] duration metric: took 2.402511734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:51:55.740465 1466525 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:51:55.740549 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:51:55.754080 1466525 system_svc.go:56] duration metric: took 13.609946ms WaitForService to wait for kubelet.
	I1225 12:51:55.754112 1466525 kubeadm.go:581] duration metric: took 2.437247788s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:51:55.754142 1466525 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:51:55.935634 1466525 request.go:629] Waited for 181.390755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes
	I1225 12:51:55.935696 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:51:55.935701 1466525 round_trippers.go:469] Request Headers:
	I1225 12:51:55.935709 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:51:55.935725 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:51:55.939038 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:51:55.939069 1466525 round_trippers.go:577] Response Headers:
	I1225 12:51:55.939080 1466525 round_trippers.go:580]     Audit-Id: 31b36266-3fc2-4348-9a89-affb2d82b905
	I1225 12:51:55.939089 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:51:55.939097 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:51:55.939104 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:51:55.939112 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:51:55.939122 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:51:55 GMT
	I1225 12:51:55.939422 1466525 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1051"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I1225 12:51:55.940136 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:51:55.940169 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:51:55.940181 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:51:55.940186 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:51:55.940189 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:51:55.940197 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:51:55.940200 1466525 node_conditions.go:105] duration metric: took 186.053075ms to run NodePressure ...
	I1225 12:51:55.940218 1466525 start.go:228] waiting for startup goroutines ...
	I1225 12:51:55.940240 1466525 start.go:242] writing updated cluster config ...
	I1225 12:51:55.940682 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:51:55.940776 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:51:55.943406 1466525 out.go:177] * Starting worker node multinode-544936-m03 in cluster multinode-544936
	I1225 12:51:55.944740 1466525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 12:51:55.944772 1466525 cache.go:56] Caching tarball of preloaded images
	I1225 12:51:55.944888 1466525 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 12:51:55.944903 1466525 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 12:51:55.945004 1466525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/config.json ...
	I1225 12:51:55.945231 1466525 start.go:365] acquiring machines lock for multinode-544936-m03: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 12:51:55.945298 1466525 start.go:369] acquired machines lock for "multinode-544936-m03" in 44.358µs
	I1225 12:51:55.945321 1466525 start.go:96] Skipping create...Using existing machine configuration
	I1225 12:51:55.945331 1466525 fix.go:54] fixHost starting: m03
	I1225 12:51:55.945598 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:51:55.945627 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:51:55.960787 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I1225 12:51:55.961269 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:51:55.961717 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:51:55.961748 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:51:55.962089 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:51:55.962297 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:51:55.962482 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetState
	I1225 12:51:55.964081 1466525 fix.go:102] recreateIfNeeded on multinode-544936-m03: state=Running err=<nil>
	W1225 12:51:55.964102 1466525 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 12:51:55.966214 1466525 out.go:177] * Updating the running kvm2 "multinode-544936-m03" VM ...
	I1225 12:51:55.967689 1466525 machine.go:88] provisioning docker machine ...
	I1225 12:51:55.967719 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:51:55.967947 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetMachineName
	I1225 12:51:55.968178 1466525 buildroot.go:166] provisioning hostname "multinode-544936-m03"
	I1225 12:51:55.968202 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetMachineName
	I1225 12:51:55.968356 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:51:55.970771 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:55.971236 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:55.971260 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:55.971415 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:51:55.971601 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:55.971761 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:55.971939 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:51:55.972134 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:51:55.972511 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I1225 12:51:55.972526 1466525 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-544936-m03 && echo "multinode-544936-m03" | sudo tee /etc/hostname
	I1225 12:51:56.125307 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-544936-m03
	
	I1225 12:51:56.125338 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:51:56.128312 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.128691 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:56.128724 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.128933 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:51:56.129168 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:56.129369 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:56.129517 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:51:56.129743 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:51:56.130160 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I1225 12:51:56.130190 1466525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-544936-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-544936-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-544936-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 12:51:56.259384 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 12:51:56.259430 1466525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 12:51:56.259456 1466525 buildroot.go:174] setting up certificates
	I1225 12:51:56.259466 1466525 provision.go:83] configureAuth start
	I1225 12:51:56.259479 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetMachineName
	I1225 12:51:56.259825 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetIP
	I1225 12:51:56.262501 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.262927 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:56.262948 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.263110 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:51:56.265184 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.265444 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:56.265467 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.265614 1466525 provision.go:138] copyHostCerts
	I1225 12:51:56.265651 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:51:56.265684 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 12:51:56.265693 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 12:51:56.265805 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 12:51:56.265911 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:51:56.265937 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 12:51:56.265947 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 12:51:56.265988 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 12:51:56.266049 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:51:56.266078 1466525 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 12:51:56.266087 1466525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 12:51:56.266121 1466525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 12:51:56.266184 1466525 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.multinode-544936-m03 san=[192.168.39.54 192.168.39.54 localhost 127.0.0.1 minikube multinode-544936-m03]
	I1225 12:51:56.479312 1466525 provision.go:172] copyRemoteCerts
	I1225 12:51:56.479375 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 12:51:56.479403 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:51:56.482131 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.482480 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:56.482498 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.482669 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:51:56.482891 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:56.483019 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:51:56.483149 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m03/id_rsa Username:docker}
	I1225 12:51:56.579736 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1225 12:51:56.579810 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 12:51:56.605371 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1225 12:51:56.605527 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1225 12:51:56.629652 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1225 12:51:56.629754 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 12:51:56.653789 1466525 provision.go:86] duration metric: configureAuth took 394.308884ms
	I1225 12:51:56.653828 1466525 buildroot.go:189] setting minikube options for container-runtime
	I1225 12:51:56.654078 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:51:56.654174 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:51:56.656994 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.657389 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:51:56.657428 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:51:56.657551 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:51:56.657780 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:56.657946 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:51:56.658059 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:51:56.658281 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:51:56.658630 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I1225 12:51:56.658647 1466525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 12:53:27.344177 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 12:53:27.344253 1466525 machine.go:91] provisioned docker machine in 1m31.376539314s
	I1225 12:53:27.344272 1466525 start.go:300] post-start starting for "multinode-544936-m03" (driver="kvm2")
	I1225 12:53:27.344286 1466525 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 12:53:27.344323 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:53:27.344697 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 12:53:27.344738 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:53:27.348169 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.348665 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:27.348709 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.348889 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:53:27.349102 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:53:27.349300 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:53:27.349438 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m03/id_rsa Username:docker}
	I1225 12:53:27.446025 1466525 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 12:53:27.450389 1466525 command_runner.go:130] > NAME=Buildroot
	I1225 12:53:27.450416 1466525 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1225 12:53:27.450421 1466525 command_runner.go:130] > ID=buildroot
	I1225 12:53:27.450426 1466525 command_runner.go:130] > VERSION_ID=2021.02.12
	I1225 12:53:27.450438 1466525 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1225 12:53:27.450570 1466525 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 12:53:27.450594 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 12:53:27.450702 1466525 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 12:53:27.450848 1466525 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 12:53:27.450863 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /etc/ssl/certs/14497972.pem
	I1225 12:53:27.450963 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 12:53:27.461270 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:53:27.486224 1466525 start.go:303] post-start completed in 141.931542ms
	I1225 12:53:27.486254 1466525 fix.go:56] fixHost completed within 1m31.540925194s
	I1225 12:53:27.486283 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:53:27.489434 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.489755 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:27.489792 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.490043 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:53:27.490271 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:53:27.490431 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:53:27.490568 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:53:27.490738 1466525 main.go:141] libmachine: Using SSH client type: native
	I1225 12:53:27.491050 1466525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I1225 12:53:27.491061 1466525 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 12:53:27.623387 1466525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703508807.615911734
	
	I1225 12:53:27.623419 1466525 fix.go:206] guest clock: 1703508807.615911734
	I1225 12:53:27.623428 1466525 fix.go:219] Guest: 2023-12-25 12:53:27.615911734 +0000 UTC Remote: 2023-12-25 12:53:27.486258371 +0000 UTC m=+553.463436894 (delta=129.653363ms)
	I1225 12:53:27.623452 1466525 fix.go:190] guest clock delta is within tolerance: 129.653363ms
	I1225 12:53:27.623458 1466525 start.go:83] releasing machines lock for "multinode-544936-m03", held for 1m31.678147435s
	I1225 12:53:27.623488 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:53:27.623771 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetIP
	I1225 12:53:27.626641 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.627021 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:27.627057 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.629404 1466525 out.go:177] * Found network options:
	I1225 12:53:27.631257 1466525 out.go:177]   - NO_PROXY=192.168.39.21,192.168.39.205
	W1225 12:53:27.632943 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	W1225 12:53:27.632966 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:53:27.632981 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:53:27.633616 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:53:27.633835 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .DriverName
	I1225 12:53:27.633948 1466525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 12:53:27.633994 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	W1225 12:53:27.634060 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	W1225 12:53:27.634087 1466525 proxy.go:119] fail to check proxy env: Error ip not in block
	I1225 12:53:27.634161 1466525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 12:53:27.634181 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHHostname
	I1225 12:53:27.636936 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.637140 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.637391 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:27.637436 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.637550 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:53:27.637742 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:27.637770 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:27.637789 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:53:27.637919 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHPort
	I1225 12:53:27.638009 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:53:27.638077 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHKeyPath
	I1225 12:53:27.638139 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m03/id_rsa Username:docker}
	I1225 12:53:27.638202 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetSSHUsername
	I1225 12:53:27.638311 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m03/id_rsa Username:docker}
	I1225 12:53:27.758318 1466525 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1225 12:53:27.884711 1466525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1225 12:53:27.890631 1466525 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1225 12:53:27.890891 1466525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 12:53:27.890969 1466525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 12:53:27.900434 1466525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 12:53:27.900467 1466525 start.go:475] detecting cgroup driver to use...
	I1225 12:53:27.900549 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 12:53:27.916392 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 12:53:27.929847 1466525 docker.go:203] disabling cri-docker service (if available) ...
	I1225 12:53:27.929923 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 12:53:27.946555 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 12:53:27.960841 1466525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 12:53:28.109668 1466525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 12:53:28.231207 1466525 docker.go:219] disabling docker service ...
	I1225 12:53:28.231294 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 12:53:28.245863 1466525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 12:53:28.259244 1466525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 12:53:28.379751 1466525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 12:53:28.515300 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 12:53:28.528931 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 12:53:28.547307 1466525 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1225 12:53:28.547354 1466525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 12:53:28.547402 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:53:28.557762 1466525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 12:53:28.557847 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:53:28.568360 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:53:28.579408 1466525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 12:53:28.590312 1466525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 12:53:28.602371 1466525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 12:53:28.612878 1466525 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1225 12:53:28.612994 1466525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 12:53:28.623465 1466525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 12:53:28.762167 1466525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 12:53:28.986315 1466525 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 12:53:28.986414 1466525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 12:53:28.992583 1466525 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1225 12:53:28.992621 1466525 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1225 12:53:28.992634 1466525 command_runner.go:130] > Device: 16h/22d	Inode: 1198        Links: 1
	I1225 12:53:28.992645 1466525 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:53:28.992654 1466525 command_runner.go:130] > Access: 2023-12-25 12:53:28.910204326 +0000
	I1225 12:53:28.992664 1466525 command_runner.go:130] > Modify: 2023-12-25 12:53:28.910204326 +0000
	I1225 12:53:28.992674 1466525 command_runner.go:130] > Change: 2023-12-25 12:53:28.910204326 +0000
	I1225 12:53:28.992678 1466525 command_runner.go:130] >  Birth: -
	I1225 12:53:28.993082 1466525 start.go:543] Will wait 60s for crictl version
	I1225 12:53:28.993151 1466525 ssh_runner.go:195] Run: which crictl
	I1225 12:53:28.996995 1466525 command_runner.go:130] > /usr/bin/crictl
	I1225 12:53:28.997376 1466525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 12:53:29.042937 1466525 command_runner.go:130] > Version:  0.1.0
	I1225 12:53:29.042972 1466525 command_runner.go:130] > RuntimeName:  cri-o
	I1225 12:53:29.042981 1466525 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1225 12:53:29.042990 1466525 command_runner.go:130] > RuntimeApiVersion:  v1
	I1225 12:53:29.044369 1466525 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 12:53:29.044464 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:53:29.112884 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:53:29.112911 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:53:29.112918 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:53:29.112923 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:53:29.112929 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:53:29.112934 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:53:29.112938 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:53:29.112943 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:53:29.112948 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:53:29.112955 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:53:29.112959 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:53:29.112964 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:53:29.113055 1466525 ssh_runner.go:195] Run: crio --version
	I1225 12:53:29.168106 1466525 command_runner.go:130] > crio version 1.24.1
	I1225 12:53:29.168134 1466525 command_runner.go:130] > Version:          1.24.1
	I1225 12:53:29.168149 1466525 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1225 12:53:29.168157 1466525 command_runner.go:130] > GitTreeState:     dirty
	I1225 12:53:29.168167 1466525 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I1225 12:53:29.168175 1466525 command_runner.go:130] > GoVersion:        go1.19.9
	I1225 12:53:29.168182 1466525 command_runner.go:130] > Compiler:         gc
	I1225 12:53:29.168192 1466525 command_runner.go:130] > Platform:         linux/amd64
	I1225 12:53:29.168202 1466525 command_runner.go:130] > Linkmode:         dynamic
	I1225 12:53:29.168218 1466525 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1225 12:53:29.168229 1466525 command_runner.go:130] > SeccompEnabled:   true
	I1225 12:53:29.168236 1466525 command_runner.go:130] > AppArmorEnabled:  false
	I1225 12:53:29.169998 1466525 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 12:53:29.171485 1466525 out.go:177]   - env NO_PROXY=192.168.39.21
	I1225 12:53:29.172892 1466525 out.go:177]   - env NO_PROXY=192.168.39.21,192.168.39.205
	I1225 12:53:29.174336 1466525 main.go:141] libmachine: (multinode-544936-m03) Calling .GetIP
	I1225 12:53:29.177142 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:29.177521 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:05:65", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:41:55 +0000 UTC Type:0 Mac:52:54:00:25:05:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-544936-m03 Clientid:01:52:54:00:25:05:65}
	I1225 12:53:29.177551 1466525 main.go:141] libmachine: (multinode-544936-m03) DBG | domain multinode-544936-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:25:05:65 in network mk-multinode-544936
	I1225 12:53:29.177728 1466525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 12:53:29.182238 1466525 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1225 12:53:29.182345 1466525 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936 for IP: 192.168.39.54
	I1225 12:53:29.182378 1466525 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 12:53:29.182542 1466525 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 12:53:29.182621 1466525 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 12:53:29.182640 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1225 12:53:29.182658 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1225 12:53:29.182670 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1225 12:53:29.182681 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1225 12:53:29.182755 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 12:53:29.182790 1466525 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 12:53:29.182808 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 12:53:29.182842 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 12:53:29.182879 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 12:53:29.182911 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 12:53:29.182959 1466525 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 12:53:29.182986 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> /usr/share/ca-certificates/14497972.pem
	I1225 12:53:29.182999 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:53:29.183013 1466525 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem -> /usr/share/ca-certificates/1449797.pem
	I1225 12:53:29.183393 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 12:53:29.208796 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 12:53:29.231760 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 12:53:29.255681 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 12:53:29.279811 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 12:53:29.302717 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 12:53:29.326950 1466525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 12:53:29.350598 1466525 ssh_runner.go:195] Run: openssl version
	I1225 12:53:29.356687 1466525 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1225 12:53:29.356760 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 12:53:29.366510 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 12:53:29.371126 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:53:29.371266 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 12:53:29.371328 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 12:53:29.376694 1466525 command_runner.go:130] > 3ec20f2e
	I1225 12:53:29.377084 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 12:53:29.385610 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 12:53:29.395168 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:53:29.399621 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:53:29.399897 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:53:29.399961 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 12:53:29.405708 1466525 command_runner.go:130] > b5213941
	I1225 12:53:29.406080 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 12:53:29.414978 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 12:53:29.424914 1466525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 12:53:29.429473 1466525 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:53:29.429758 1466525 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 12:53:29.429848 1466525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 12:53:29.435531 1466525 command_runner.go:130] > 51391683
	I1225 12:53:29.435826 1466525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 12:53:29.444364 1466525 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 12:53:29.448527 1466525 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:53:29.448567 1466525 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 12:53:29.448648 1466525 ssh_runner.go:195] Run: crio config
	I1225 12:53:29.501659 1466525 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1225 12:53:29.501686 1466525 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1225 12:53:29.501696 1466525 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1225 12:53:29.501711 1466525 command_runner.go:130] > #
	I1225 12:53:29.501723 1466525 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1225 12:53:29.501733 1466525 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1225 12:53:29.501742 1466525 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1225 12:53:29.501753 1466525 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1225 12:53:29.501758 1466525 command_runner.go:130] > # reload'.
	I1225 12:53:29.501768 1466525 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1225 12:53:29.501778 1466525 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1225 12:53:29.501792 1466525 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1225 12:53:29.501804 1466525 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1225 12:53:29.501814 1466525 command_runner.go:130] > [crio]
	I1225 12:53:29.501821 1466525 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1225 12:53:29.501829 1466525 command_runner.go:130] > # containers images, in this directory.
	I1225 12:53:29.501837 1466525 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1225 12:53:29.501856 1466525 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1225 12:53:29.501869 1466525 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1225 12:53:29.501879 1466525 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1225 12:53:29.501893 1466525 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1225 12:53:29.502188 1466525 command_runner.go:130] > storage_driver = "overlay"
	I1225 12:53:29.502214 1466525 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1225 12:53:29.502224 1466525 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1225 12:53:29.502233 1466525 command_runner.go:130] > storage_option = [
	I1225 12:53:29.502370 1466525 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1225 12:53:29.502460 1466525 command_runner.go:130] > ]
	I1225 12:53:29.502478 1466525 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1225 12:53:29.502488 1466525 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1225 12:53:29.503085 1466525 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1225 12:53:29.503102 1466525 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1225 12:53:29.503112 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1225 12:53:29.503119 1466525 command_runner.go:130] > # always happen on a node reboot
	I1225 12:53:29.503128 1466525 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1225 12:53:29.503137 1466525 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1225 12:53:29.503144 1466525 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1225 12:53:29.503157 1466525 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1225 12:53:29.503257 1466525 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1225 12:53:29.503284 1466525 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1225 12:53:29.503299 1466525 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1225 12:53:29.503307 1466525 command_runner.go:130] > # internal_wipe = true
	I1225 12:53:29.503318 1466525 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1225 12:53:29.503333 1466525 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1225 12:53:29.503342 1466525 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1225 12:53:29.503353 1466525 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1225 12:53:29.503363 1466525 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1225 12:53:29.503373 1466525 command_runner.go:130] > [crio.api]
	I1225 12:53:29.503386 1466525 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1225 12:53:29.503399 1466525 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1225 12:53:29.503413 1466525 command_runner.go:130] > # IP address on which the stream server will listen.
	I1225 12:53:29.503422 1466525 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1225 12:53:29.503438 1466525 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1225 12:53:29.503451 1466525 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1225 12:53:29.503461 1466525 command_runner.go:130] > # stream_port = "0"
	I1225 12:53:29.503471 1466525 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1225 12:53:29.503484 1466525 command_runner.go:130] > # stream_enable_tls = false
	I1225 12:53:29.503496 1466525 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1225 12:53:29.503506 1466525 command_runner.go:130] > # stream_idle_timeout = ""
	I1225 12:53:29.503518 1466525 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1225 12:53:29.503533 1466525 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1225 12:53:29.503544 1466525 command_runner.go:130] > # minutes.
	I1225 12:53:29.503554 1466525 command_runner.go:130] > # stream_tls_cert = ""
	I1225 12:53:29.503570 1466525 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1225 12:53:29.503585 1466525 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1225 12:53:29.503596 1466525 command_runner.go:130] > # stream_tls_key = ""
	I1225 12:53:29.503608 1466525 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1225 12:53:29.503624 1466525 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1225 12:53:29.503637 1466525 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1225 12:53:29.503645 1466525 command_runner.go:130] > # stream_tls_ca = ""
	I1225 12:53:29.503658 1466525 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:53:29.503669 1466525 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1225 12:53:29.503686 1466525 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1225 12:53:29.503704 1466525 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1225 12:53:29.503733 1466525 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1225 12:53:29.503748 1466525 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1225 12:53:29.503758 1466525 command_runner.go:130] > [crio.runtime]
	I1225 12:53:29.503767 1466525 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1225 12:53:29.503779 1466525 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1225 12:53:29.503789 1466525 command_runner.go:130] > # "nofile=1024:2048"
	I1225 12:53:29.503802 1466525 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1225 12:53:29.503811 1466525 command_runner.go:130] > # default_ulimits = [
	I1225 12:53:29.503817 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.503830 1466525 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1225 12:53:29.503840 1466525 command_runner.go:130] > # no_pivot = false
	I1225 12:53:29.503859 1466525 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1225 12:53:29.503872 1466525 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1225 12:53:29.503884 1466525 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1225 12:53:29.503896 1466525 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1225 12:53:29.503907 1466525 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1225 12:53:29.503922 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:53:29.503933 1466525 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1225 12:53:29.503948 1466525 command_runner.go:130] > # Cgroup setting for conmon
	I1225 12:53:29.503964 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1225 12:53:29.503975 1466525 command_runner.go:130] > conmon_cgroup = "pod"
	I1225 12:53:29.503992 1466525 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1225 12:53:29.504005 1466525 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1225 12:53:29.504017 1466525 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1225 12:53:29.504028 1466525 command_runner.go:130] > conmon_env = [
	I1225 12:53:29.504041 1466525 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1225 12:53:29.504050 1466525 command_runner.go:130] > ]
	I1225 12:53:29.504059 1466525 command_runner.go:130] > # Additional environment variables to set for all the
	I1225 12:53:29.504071 1466525 command_runner.go:130] > # containers. These are overridden if set in the
	I1225 12:53:29.504084 1466525 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1225 12:53:29.504093 1466525 command_runner.go:130] > # default_env = [
	I1225 12:53:29.504098 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504116 1466525 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1225 12:53:29.504127 1466525 command_runner.go:130] > # selinux = false
	I1225 12:53:29.504145 1466525 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1225 12:53:29.504160 1466525 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1225 12:53:29.504173 1466525 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1225 12:53:29.504183 1466525 command_runner.go:130] > # seccomp_profile = ""
	I1225 12:53:29.504194 1466525 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1225 12:53:29.504204 1466525 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1225 12:53:29.504221 1466525 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1225 12:53:29.504232 1466525 command_runner.go:130] > # which might increase security.
	I1225 12:53:29.504240 1466525 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1225 12:53:29.504255 1466525 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1225 12:53:29.504270 1466525 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1225 12:53:29.504284 1466525 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1225 12:53:29.504299 1466525 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1225 12:53:29.504311 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:53:29.504321 1466525 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1225 12:53:29.504333 1466525 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1225 12:53:29.504345 1466525 command_runner.go:130] > # the cgroup blockio controller.
	I1225 12:53:29.504356 1466525 command_runner.go:130] > # blockio_config_file = ""
	I1225 12:53:29.504370 1466525 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1225 12:53:29.504380 1466525 command_runner.go:130] > # irqbalance daemon.
	I1225 12:53:29.504390 1466525 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1225 12:53:29.504403 1466525 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1225 12:53:29.504416 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:53:29.504426 1466525 command_runner.go:130] > # rdt_config_file = ""
	I1225 12:53:29.504440 1466525 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1225 12:53:29.504451 1466525 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1225 12:53:29.504466 1466525 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1225 12:53:29.504477 1466525 command_runner.go:130] > # separate_pull_cgroup = ""
	I1225 12:53:29.504492 1466525 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1225 12:53:29.504506 1466525 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1225 12:53:29.504514 1466525 command_runner.go:130] > # will be added.
	I1225 12:53:29.504524 1466525 command_runner.go:130] > # default_capabilities = [
	I1225 12:53:29.504534 1466525 command_runner.go:130] > # 	"CHOWN",
	I1225 12:53:29.504541 1466525 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1225 12:53:29.504551 1466525 command_runner.go:130] > # 	"FSETID",
	I1225 12:53:29.504560 1466525 command_runner.go:130] > # 	"FOWNER",
	I1225 12:53:29.504570 1466525 command_runner.go:130] > # 	"SETGID",
	I1225 12:53:29.504581 1466525 command_runner.go:130] > # 	"SETUID",
	I1225 12:53:29.504592 1466525 command_runner.go:130] > # 	"SETPCAP",
	I1225 12:53:29.504599 1466525 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1225 12:53:29.504609 1466525 command_runner.go:130] > # 	"KILL",
	I1225 12:53:29.504614 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504627 1466525 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1225 12:53:29.504640 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:53:29.504647 1466525 command_runner.go:130] > # default_sysctls = [
	I1225 12:53:29.504656 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504664 1466525 command_runner.go:130] > # List of devices on the host that a
	I1225 12:53:29.504677 1466525 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1225 12:53:29.504687 1466525 command_runner.go:130] > # allowed_devices = [
	I1225 12:53:29.504697 1466525 command_runner.go:130] > # 	"/dev/fuse",
	I1225 12:53:29.504704 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504716 1466525 command_runner.go:130] > # List of additional devices. specified as
	I1225 12:53:29.504730 1466525 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1225 12:53:29.504742 1466525 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1225 12:53:29.504772 1466525 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1225 12:53:29.504782 1466525 command_runner.go:130] > # additional_devices = [
	I1225 12:53:29.504791 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504800 1466525 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1225 12:53:29.504815 1466525 command_runner.go:130] > # cdi_spec_dirs = [
	I1225 12:53:29.504821 1466525 command_runner.go:130] > # 	"/etc/cdi",
	I1225 12:53:29.504832 1466525 command_runner.go:130] > # 	"/var/run/cdi",
	I1225 12:53:29.504839 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504849 1466525 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1225 12:53:29.504862 1466525 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1225 12:53:29.504872 1466525 command_runner.go:130] > # Defaults to false.
	I1225 12:53:29.504880 1466525 command_runner.go:130] > # device_ownership_from_security_context = false
	I1225 12:53:29.504898 1466525 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1225 12:53:29.504912 1466525 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1225 12:53:29.504923 1466525 command_runner.go:130] > # hooks_dir = [
	I1225 12:53:29.504936 1466525 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1225 12:53:29.504946 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.504961 1466525 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1225 12:53:29.504978 1466525 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1225 12:53:29.504991 1466525 command_runner.go:130] > # its default mounts from the following two files:
	I1225 12:53:29.505005 1466525 command_runner.go:130] > #
	I1225 12:53:29.505017 1466525 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1225 12:53:29.505029 1466525 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1225 12:53:29.505043 1466525 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1225 12:53:29.505053 1466525 command_runner.go:130] > #
	I1225 12:53:29.505064 1466525 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1225 12:53:29.505078 1466525 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1225 12:53:29.505091 1466525 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1225 12:53:29.505111 1466525 command_runner.go:130] > #      only add mounts it finds in this file.
	I1225 12:53:29.505121 1466525 command_runner.go:130] > #
	I1225 12:53:29.505129 1466525 command_runner.go:130] > # default_mounts_file = ""
	I1225 12:53:29.505141 1466525 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1225 12:53:29.505158 1466525 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1225 12:53:29.505168 1466525 command_runner.go:130] > pids_limit = 1024
	I1225 12:53:29.505179 1466525 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1225 12:53:29.505194 1466525 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1225 12:53:29.505210 1466525 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1225 12:53:29.505223 1466525 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1225 12:53:29.505233 1466525 command_runner.go:130] > # log_size_max = -1
	I1225 12:53:29.505245 1466525 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1225 12:53:29.505255 1466525 command_runner.go:130] > # log_to_journald = false
	I1225 12:53:29.505266 1466525 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1225 12:53:29.505278 1466525 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1225 12:53:29.505286 1466525 command_runner.go:130] > # Path to directory for container attach sockets.
	I1225 12:53:29.505297 1466525 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1225 12:53:29.505305 1466525 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1225 12:53:29.505311 1466525 command_runner.go:130] > # bind_mount_prefix = ""
	I1225 12:53:29.505324 1466525 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1225 12:53:29.505333 1466525 command_runner.go:130] > # read_only = false
	I1225 12:53:29.505344 1466525 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1225 12:53:29.505356 1466525 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1225 12:53:29.505367 1466525 command_runner.go:130] > # live configuration reload.
	I1225 12:53:29.505376 1466525 command_runner.go:130] > # log_level = "info"
	I1225 12:53:29.505384 1466525 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1225 12:53:29.505397 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:53:29.505409 1466525 command_runner.go:130] > # log_filter = ""
	I1225 12:53:29.505419 1466525 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1225 12:53:29.505429 1466525 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1225 12:53:29.505436 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:53:29.505445 1466525 command_runner.go:130] > # uid_mappings = ""
	I1225 12:53:29.505455 1466525 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1225 12:53:29.505468 1466525 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1225 12:53:29.505477 1466525 command_runner.go:130] > # separated by comma.
	I1225 12:53:29.505483 1466525 command_runner.go:130] > # gid_mappings = ""
	I1225 12:53:29.505490 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1225 12:53:29.505498 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:53:29.505504 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:53:29.505509 1466525 command_runner.go:130] > # minimum_mappable_uid = -1
	I1225 12:53:29.505515 1466525 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1225 12:53:29.505523 1466525 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1225 12:53:29.505529 1466525 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1225 12:53:29.505534 1466525 command_runner.go:130] > # minimum_mappable_gid = -1
	I1225 12:53:29.505541 1466525 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1225 12:53:29.505549 1466525 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1225 12:53:29.505554 1466525 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1225 12:53:29.505560 1466525 command_runner.go:130] > # ctr_stop_timeout = 30
	I1225 12:53:29.505566 1466525 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1225 12:53:29.505574 1466525 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1225 12:53:29.505579 1466525 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1225 12:53:29.505584 1466525 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1225 12:53:29.505589 1466525 command_runner.go:130] > drop_infra_ctr = false
	I1225 12:53:29.505595 1466525 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1225 12:53:29.505602 1466525 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1225 12:53:29.505609 1466525 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1225 12:53:29.505616 1466525 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1225 12:53:29.505621 1466525 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1225 12:53:29.505629 1466525 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1225 12:53:29.505633 1466525 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1225 12:53:29.505643 1466525 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1225 12:53:29.505647 1466525 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1225 12:53:29.505655 1466525 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1225 12:53:29.505664 1466525 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1225 12:53:29.505670 1466525 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1225 12:53:29.505677 1466525 command_runner.go:130] > # default_runtime = "runc"
	I1225 12:53:29.505682 1466525 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1225 12:53:29.505688 1466525 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1225 12:53:29.505700 1466525 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1225 12:53:29.505707 1466525 command_runner.go:130] > # creation as a file is not desired either.
	I1225 12:53:29.505714 1466525 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1225 12:53:29.505722 1466525 command_runner.go:130] > # the hostname is being managed dynamically.
	I1225 12:53:29.505726 1466525 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1225 12:53:29.505730 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.505737 1466525 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1225 12:53:29.505745 1466525 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1225 12:53:29.505751 1466525 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1225 12:53:29.505760 1466525 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1225 12:53:29.505768 1466525 command_runner.go:130] > #
	I1225 12:53:29.505772 1466525 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1225 12:53:29.505780 1466525 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1225 12:53:29.505789 1466525 command_runner.go:130] > #  runtime_type = "oci"
	I1225 12:53:29.505797 1466525 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1225 12:53:29.505807 1466525 command_runner.go:130] > #  privileged_without_host_devices = false
	I1225 12:53:29.505815 1466525 command_runner.go:130] > #  allowed_annotations = []
	I1225 12:53:29.505823 1466525 command_runner.go:130] > # Where:
	I1225 12:53:29.505832 1466525 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1225 12:53:29.505845 1466525 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1225 12:53:29.505857 1466525 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1225 12:53:29.505869 1466525 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1225 12:53:29.505879 1466525 command_runner.go:130] > #   in $PATH.
	I1225 12:53:29.505886 1466525 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1225 12:53:29.505893 1466525 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1225 12:53:29.505903 1466525 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1225 12:53:29.505909 1466525 command_runner.go:130] > #   state.
	I1225 12:53:29.505915 1466525 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1225 12:53:29.505924 1466525 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1225 12:53:29.505930 1466525 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1225 12:53:29.505937 1466525 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1225 12:53:29.505944 1466525 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1225 12:53:29.505953 1466525 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1225 12:53:29.505958 1466525 command_runner.go:130] > #   The currently recognized values are:
	I1225 12:53:29.505967 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1225 12:53:29.505974 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1225 12:53:29.505982 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1225 12:53:29.505988 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1225 12:53:29.505997 1466525 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1225 12:53:29.506004 1466525 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1225 12:53:29.506012 1466525 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1225 12:53:29.506019 1466525 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1225 12:53:29.506026 1466525 command_runner.go:130] > #   should be moved to the container's cgroup
	I1225 12:53:29.506032 1466525 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1225 12:53:29.506039 1466525 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1225 12:53:29.506044 1466525 command_runner.go:130] > runtime_type = "oci"
	I1225 12:53:29.506051 1466525 command_runner.go:130] > runtime_root = "/run/runc"
	I1225 12:53:29.506055 1466525 command_runner.go:130] > runtime_config_path = ""
	I1225 12:53:29.506061 1466525 command_runner.go:130] > monitor_path = ""
	I1225 12:53:29.506066 1466525 command_runner.go:130] > monitor_cgroup = ""
	I1225 12:53:29.506070 1466525 command_runner.go:130] > monitor_exec_cgroup = ""
	I1225 12:53:29.506077 1466525 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1225 12:53:29.506083 1466525 command_runner.go:130] > # running containers
	I1225 12:53:29.506088 1466525 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1225 12:53:29.506096 1466525 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1225 12:53:29.506164 1466525 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1225 12:53:29.506184 1466525 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1225 12:53:29.506190 1466525 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1225 12:53:29.506195 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1225 12:53:29.506200 1466525 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1225 12:53:29.506206 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1225 12:53:29.506211 1466525 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1225 12:53:29.506219 1466525 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1225 12:53:29.506225 1466525 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1225 12:53:29.506233 1466525 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1225 12:53:29.506241 1466525 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1225 12:53:29.506251 1466525 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1225 12:53:29.506260 1466525 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1225 12:53:29.506268 1466525 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1225 12:53:29.506277 1466525 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1225 12:53:29.506287 1466525 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1225 12:53:29.506293 1466525 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1225 12:53:29.506302 1466525 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1225 12:53:29.506306 1466525 command_runner.go:130] > # Example:
	I1225 12:53:29.506313 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1225 12:53:29.506331 1466525 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1225 12:53:29.506339 1466525 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1225 12:53:29.506344 1466525 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1225 12:53:29.506350 1466525 command_runner.go:130] > # cpuset = "0-1"
	I1225 12:53:29.506354 1466525 command_runner.go:130] > # cpushares = 0
	I1225 12:53:29.506361 1466525 command_runner.go:130] > # Where:
	I1225 12:53:29.506365 1466525 command_runner.go:130] > # The workload name is workload-type.
	I1225 12:53:29.506372 1466525 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1225 12:53:29.506379 1466525 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1225 12:53:29.506385 1466525 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1225 12:53:29.506395 1466525 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1225 12:53:29.506403 1466525 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1225 12:53:29.506406 1466525 command_runner.go:130] > # 
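A pod opts into such a workload purely through annotations on the pod object. The sketch below is illustrative only: the workload name "workload-type", the container name "app", and the cpushares value are hypothetical and simply mirror the commented example above; it builds the annotated pod with client-go types rather than showing any code minikube itself runs.

	package example
	
	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	// buildWorkloadPod sketches a pod opting into the hypothetical
	// "workload-type" workload from the commented example above: the
	// activation annotation is matched by key only (value ignored), and the
	// per-container override follows the "$annotation_prefix/$ctrName" form
	// shown in that example.
	func buildWorkloadPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "demo",
				Annotations: map[string]string{
					"io.crio/workload":          "",                      // activation annotation, key only
					"io.crio.workload-type/app": `{"cpushares": "512"}`,  // hypothetical per-container override
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
			},
		}
	}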
	I1225 12:53:29.506415 1466525 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1225 12:53:29.506420 1466525 command_runner.go:130] > #
	I1225 12:53:29.506426 1466525 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1225 12:53:29.506447 1466525 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1225 12:53:29.506458 1466525 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1225 12:53:29.506470 1466525 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1225 12:53:29.506477 1466525 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1225 12:53:29.506481 1466525 command_runner.go:130] > [crio.image]
	I1225 12:53:29.506490 1466525 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1225 12:53:29.506495 1466525 command_runner.go:130] > # default_transport = "docker://"
	I1225 12:53:29.506503 1466525 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1225 12:53:29.506510 1466525 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:53:29.506516 1466525 command_runner.go:130] > # global_auth_file = ""
	I1225 12:53:29.506522 1466525 command_runner.go:130] > # The image used to instantiate infra containers.
	I1225 12:53:29.506534 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:53:29.506539 1466525 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1225 12:53:29.506548 1466525 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1225 12:53:29.506554 1466525 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1225 12:53:29.506560 1466525 command_runner.go:130] > # This option supports live configuration reload.
	I1225 12:53:29.506564 1466525 command_runner.go:130] > # pause_image_auth_file = ""
	I1225 12:53:29.506570 1466525 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1225 12:53:29.506578 1466525 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1225 12:53:29.506584 1466525 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1225 12:53:29.506593 1466525 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1225 12:53:29.506597 1466525 command_runner.go:130] > # pause_command = "/pause"
	I1225 12:53:29.506605 1466525 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1225 12:53:29.506612 1466525 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1225 12:53:29.506620 1466525 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1225 12:53:29.506626 1466525 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1225 12:53:29.506631 1466525 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1225 12:53:29.506637 1466525 command_runner.go:130] > # signature_policy = ""
	I1225 12:53:29.506643 1466525 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1225 12:53:29.506651 1466525 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1225 12:53:29.506655 1466525 command_runner.go:130] > # changing them here.
	I1225 12:53:29.506660 1466525 command_runner.go:130] > # insecure_registries = [
	I1225 12:53:29.506664 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.506672 1466525 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1225 12:53:29.506677 1466525 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1225 12:53:29.506683 1466525 command_runner.go:130] > # image_volumes = "mkdir"
	I1225 12:53:29.506688 1466525 command_runner.go:130] > # Temporary directory to use for storing big files
	I1225 12:53:29.506695 1466525 command_runner.go:130] > # big_files_temporary_dir = ""
	I1225 12:53:29.506701 1466525 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1225 12:53:29.506705 1466525 command_runner.go:130] > # CNI plugins.
	I1225 12:53:29.506709 1466525 command_runner.go:130] > [crio.network]
	I1225 12:53:29.506715 1466525 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1225 12:53:29.506723 1466525 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1225 12:53:29.506727 1466525 command_runner.go:130] > # cni_default_network = ""
	I1225 12:53:29.506735 1466525 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1225 12:53:29.506740 1466525 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1225 12:53:29.506749 1466525 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1225 12:53:29.506764 1466525 command_runner.go:130] > # plugin_dirs = [
	I1225 12:53:29.506768 1466525 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1225 12:53:29.506775 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.506785 1466525 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1225 12:53:29.506794 1466525 command_runner.go:130] > [crio.metrics]
	I1225 12:53:29.506803 1466525 command_runner.go:130] > # Globally enable or disable metrics support.
	I1225 12:53:29.506812 1466525 command_runner.go:130] > enable_metrics = true
	I1225 12:53:29.506821 1466525 command_runner.go:130] > # Specify enabled metrics collectors.
	I1225 12:53:29.506831 1466525 command_runner.go:130] > # Per default all metrics are enabled.
	I1225 12:53:29.506845 1466525 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1225 12:53:29.506858 1466525 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1225 12:53:29.506870 1466525 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1225 12:53:29.506878 1466525 command_runner.go:130] > # metrics_collectors = [
	I1225 12:53:29.506882 1466525 command_runner.go:130] > # 	"operations",
	I1225 12:53:29.506887 1466525 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1225 12:53:29.506894 1466525 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1225 12:53:29.506899 1466525 command_runner.go:130] > # 	"operations_errors",
	I1225 12:53:29.506905 1466525 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1225 12:53:29.506910 1466525 command_runner.go:130] > # 	"image_pulls_by_name",
	I1225 12:53:29.506917 1466525 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1225 12:53:29.506921 1466525 command_runner.go:130] > # 	"image_pulls_failures",
	I1225 12:53:29.506926 1466525 command_runner.go:130] > # 	"image_pulls_successes",
	I1225 12:53:29.506931 1466525 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1225 12:53:29.506937 1466525 command_runner.go:130] > # 	"image_layer_reuse",
	I1225 12:53:29.506941 1466525 command_runner.go:130] > # 	"containers_oom_total",
	I1225 12:53:29.506945 1466525 command_runner.go:130] > # 	"containers_oom",
	I1225 12:53:29.506952 1466525 command_runner.go:130] > # 	"processes_defunct",
	I1225 12:53:29.506956 1466525 command_runner.go:130] > # 	"operations_total",
	I1225 12:53:29.506960 1466525 command_runner.go:130] > # 	"operations_latency_seconds",
	I1225 12:53:29.506969 1466525 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1225 12:53:29.506979 1466525 command_runner.go:130] > # 	"operations_errors_total",
	I1225 12:53:29.506989 1466525 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1225 12:53:29.507000 1466525 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1225 12:53:29.507010 1466525 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1225 12:53:29.507020 1466525 command_runner.go:130] > # 	"image_pulls_success_total",
	I1225 12:53:29.507027 1466525 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1225 12:53:29.507039 1466525 command_runner.go:130] > # 	"containers_oom_count_total",
	I1225 12:53:29.507046 1466525 command_runner.go:130] > # ]
	I1225 12:53:29.507052 1466525 command_runner.go:130] > # The port on which the metrics server will listen.
	I1225 12:53:29.507058 1466525 command_runner.go:130] > # metrics_port = 9090
	I1225 12:53:29.507063 1466525 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1225 12:53:29.507070 1466525 command_runner.go:130] > # metrics_socket = ""
	I1225 12:53:29.507078 1466525 command_runner.go:130] > # The certificate for the secure metrics server.
	I1225 12:53:29.507087 1466525 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1225 12:53:29.507093 1466525 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1225 12:53:29.507100 1466525 command_runner.go:130] > # certificate on any modification event.
	I1225 12:53:29.507110 1466525 command_runner.go:130] > # metrics_cert = ""
	I1225 12:53:29.507118 1466525 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1225 12:53:29.507123 1466525 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1225 12:53:29.507127 1466525 command_runner.go:130] > # metrics_key = ""
	I1225 12:53:29.507137 1466525 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1225 12:53:29.507141 1466525 command_runner.go:130] > [crio.tracing]
	I1225 12:53:29.507146 1466525 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1225 12:53:29.507153 1466525 command_runner.go:130] > # enable_tracing = false
	I1225 12:53:29.507158 1466525 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1225 12:53:29.507165 1466525 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1225 12:53:29.507171 1466525 command_runner.go:130] > # Number of samples to collect per million spans.
	I1225 12:53:29.507177 1466525 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1225 12:53:29.507184 1466525 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1225 12:53:29.507191 1466525 command_runner.go:130] > [crio.stats]
	I1225 12:53:29.507196 1466525 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1225 12:53:29.507204 1466525 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1225 12:53:29.507208 1466525 command_runner.go:130] > # stats_collection_period = 0
	I1225 12:53:29.507461 1466525 command_runner.go:130] ! time="2023-12-25 12:53:29.490159628Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1225 12:53:29.507489 1466525 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1225 12:53:29.507654 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:53:29.507671 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:53:29.507685 1466525 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 12:53:29.507715 1466525 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-544936 NodeName:multinode-544936-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 12:53:29.507885 1466525 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-544936-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 12:53:29.507969 1466525 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-544936-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 12:53:29.508044 1466525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 12:53:29.517613 1466525 command_runner.go:130] > kubeadm
	I1225 12:53:29.517634 1466525 command_runner.go:130] > kubectl
	I1225 12:53:29.517638 1466525 command_runner.go:130] > kubelet
	I1225 12:53:29.517660 1466525 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 12:53:29.517716 1466525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1225 12:53:29.527394 1466525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1225 12:53:29.544326 1466525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 12:53:29.561754 1466525 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I1225 12:53:29.566381 1466525 command_runner.go:130] > 192.168.39.21	control-plane.minikube.internal
	I1225 12:53:29.566621 1466525 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:53:29.566877 1466525 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:53:29.567022 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:53:29.567076 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:53:29.582657 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1225 12:53:29.583124 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:53:29.583581 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:53:29.583607 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:53:29.584028 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:53:29.584232 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:53:29.584405 1466525 start.go:304] JoinCluster: &{Name:multinode-544936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-544936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:53:29.584521 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1225 12:53:29.584540 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:53:29.587386 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:53:29.587826 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:53:29.587866 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:53:29.587979 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:53:29.588172 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:53:29.588347 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:53:29.588483 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:53:29.797869 1466525 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fixd5s.e277j8pu9h8v85qo --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
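The join command above is obtained by running kubeadm over SSH on the control-plane node and capturing its stdout; minikube does this through its own ssh_runner/sshutil wrappers. As a minimal stand-alone sketch of the same pattern (assuming golang.org/x/crypto/ssh, which is not shown in this log, and reusing the address, user and key path from the ssh client line above):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runRemote dials a node, runs one command, and returns its trimmed stdout,
	// mirroring the ssh_runner pattern seen in this log. Error handling is
	// reduced to the bare minimum for brevity.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.Output(cmd)
		return strings.TrimSpace(string(out)), err
	}
	
	func main() {
		joinCmd, err := runRemote("192.168.39.21:22", "docker",
			"/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa",
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0`)
		if err != nil {
			panic(err)
		}
		// The printed command is later replayed on the joining worker with extra
		// flags, as shown further down in this log.
		fmt.Println(joinCmd)
	}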
	I1225 12:53:29.797938 1466525 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1225 12:53:29.797988 1466525 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:53:29.798419 1466525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:53:29.798504 1466525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:53:29.813875 1466525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I1225 12:53:29.814390 1466525 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:53:29.814858 1466525 main.go:141] libmachine: Using API Version  1
	I1225 12:53:29.814883 1466525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:53:29.815267 1466525 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:53:29.815468 1466525 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:53:29.815659 1466525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-544936-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1225 12:53:29.815678 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:53:29.818618 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:53:29.819043 1466525 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:49:24 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:53:29.819062 1466525 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:53:29.819303 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:53:29.819509 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:53:29.819650 1466525 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:53:29.819796 1466525 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:53:30.019894 1466525 command_runner.go:130] > node/multinode-544936-m03 cordoned
	I1225 12:53:33.064566 1466525 command_runner.go:130] > pod "busybox-5bc68d56bd-c8v59" has DeletionTimestamp older than 1 seconds, skipping
	I1225 12:53:33.064595 1466525 command_runner.go:130] > node/multinode-544936-m03 drained
	I1225 12:53:33.066212 1466525 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1225 12:53:33.066234 1466525 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-7cr8v, kube-system/kube-proxy-gkxgw
	I1225 12:53:33.066264 1466525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-544936-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.250575498s)
	I1225 12:53:33.066290 1466525 node.go:108] successfully drained node "m03"
	I1225 12:53:33.066714 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:53:33.066972 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:53:33.067396 1466525 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1225 12:53:33.067454 1466525 round_trippers.go:463] DELETE https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:33.067465 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:33.067473 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:33.067479 1466525 round_trippers.go:473]     Content-Type: application/json
	I1225 12:53:33.067487 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:33.084252 1466525 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1225 12:53:33.084282 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:33.084298 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:33.084304 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:33.084309 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:33.084314 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:33.084320 1466525 round_trippers.go:580]     Content-Length: 171
	I1225 12:53:33.084325 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:33 GMT
	I1225 12:53:33.084330 1466525 round_trippers.go:580]     Audit-Id: fc0b4f9c-7111-49f4-8121-9d8e71b1bc66
	I1225 12:53:33.084567 1466525 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-544936-m03","kind":"nodes","uid":"3744762d-9d11-4193-82ab-cd70245fefca"}}
	I1225 12:53:33.084627 1466525 node.go:124] successfully deleted node "m03"
	I1225 12:53:33.084643 1466525 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
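For reference, the raw DELETE request logged above is the same call a typed client-go clientset would make. A minimal sketch of that equivalent, assuming the kubeconfig path used throughout this log (this is not minikube's actual code path, which goes through its own loader/kapi helpers):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path as used throughout this log.
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/17847-1442600/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Typed equivalent of DELETE /api/v1/nodes/multinode-544936-m03 with the
		// request body {"kind":"DeleteOptions","apiVersion":"v1"} shown above.
		if err := clientset.CoreV1().Nodes().Delete(context.Background(),
			"multinode-544936-m03", metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node multinode-544936-m03 deleted")
	}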
	I1225 12:53:33.084674 1466525 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1225 12:53:33.084701 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fixd5s.e277j8pu9h8v85qo --discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-544936-m03"
	I1225 12:53:33.141213 1466525 command_runner.go:130] ! W1225 12:53:33.133918    2388 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1225 12:53:33.141514 1466525 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1225 12:53:33.295195 1466525 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1225 12:53:33.295254 1466525 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1225 12:53:34.047758 1466525 command_runner.go:130] > [preflight] Running pre-flight checks
	I1225 12:53:34.047808 1466525 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1225 12:53:34.047823 1466525 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1225 12:53:34.047836 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 12:53:34.047849 1466525 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 12:53:34.047859 1466525 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1225 12:53:34.047869 1466525 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1225 12:53:34.047882 1466525 command_runner.go:130] > This node has joined the cluster:
	I1225 12:53:34.047894 1466525 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1225 12:53:34.047906 1466525 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1225 12:53:34.047919 1466525 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1225 12:53:34.047955 1466525 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1225 12:53:34.314998 1466525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=multinode-544936 minikube.k8s.io/updated_at=2023_12_25T12_53_34_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 12:53:34.410157 1466525 command_runner.go:130] > node/multinode-544936-m02 labeled
	I1225 12:53:34.422510 1466525 command_runner.go:130] > node/multinode-544936-m03 labeled
	I1225 12:53:34.424322 1466525 start.go:306] JoinCluster complete in 4.839911275s
	I1225 12:53:34.424350 1466525 cni.go:84] Creating CNI manager for ""
	I1225 12:53:34.424356 1466525 cni.go:136] 3 nodes found, recommending kindnet
	I1225 12:53:34.424416 1466525 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 12:53:34.429838 1466525 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1225 12:53:34.429872 1466525 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1225 12:53:34.429883 1466525 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1225 12:53:34.429892 1466525 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1225 12:53:34.429903 1466525 command_runner.go:130] > Access: 2023-12-25 12:49:25.300350221 +0000
	I1225 12:53:34.429910 1466525 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1225 12:53:34.429918 1466525 command_runner.go:130] > Change: 2023-12-25 12:49:23.350350221 +0000
	I1225 12:53:34.429928 1466525 command_runner.go:130] >  Birth: -
	I1225 12:53:34.430340 1466525 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1225 12:53:34.430360 1466525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1225 12:53:34.450502 1466525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 12:53:34.817390 1466525 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:53:34.817427 1466525 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1225 12:53:34.817436 1466525 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1225 12:53:34.817443 1466525 command_runner.go:130] > daemonset.apps/kindnet configured
	I1225 12:53:34.817842 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:53:34.818115 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:53:34.818504 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1225 12:53:34.818516 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.818524 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.818533 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.821932 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:34.821950 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.821957 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.821963 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.821968 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.821973 1466525 round_trippers.go:580]     Content-Length: 291
	I1225 12:53:34.821978 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.821983 1466525 round_trippers.go:580]     Audit-Id: e72dd0b5-7a34-4891-a536-7808706588ab
	I1225 12:53:34.821988 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.822114 1466525 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1deabb96-9bfd-47c0-8cbc-978c4199f86b","resourceVersion":"883","creationTimestamp":"2023-12-25T12:39:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1225 12:53:34.822227 1466525 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-544936" context rescaled to 1 replicas
	I1225 12:53:34.822261 1466525 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.54 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1225 12:53:34.824077 1466525 out.go:177] * Verifying Kubernetes components...
	I1225 12:53:34.825255 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:53:34.839951 1466525 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:53:34.840311 1466525 kapi.go:59] client config for multinode-544936: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.crt", KeyFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/multinode-544936/client.key", CAFile:"/home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1225 12:53:34.840640 1466525 node_ready.go:35] waiting up to 6m0s for node "multinode-544936-m03" to be "Ready" ...
	I1225 12:53:34.840740 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:34.840752 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.840764 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.840780 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.843192 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:34.843221 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.843231 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.843238 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.843245 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.843257 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.843265 1466525 round_trippers.go:580]     Audit-Id: ac2bf572-3a5b-4c44-9f9b-912521008327
	I1225 12:53:34.843280 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.844093 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"be4bf590-c76b-44e5-bb48-3057ad728689","resourceVersion":"1209","creationTimestamp":"2023-12-25T12:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_53_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1225 12:53:34.844469 1466525 node_ready.go:49] node "multinode-544936-m03" has status "Ready":"True"
	I1225 12:53:34.844491 1466525 node_ready.go:38] duration metric: took 3.830159ms waiting for node "multinode-544936-m03" to be "Ready" ...
	I1225 12:53:34.844503 1466525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:53:34.844584 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods
	I1225 12:53:34.844596 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.844607 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.844622 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.852236 1466525 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1225 12:53:34.852257 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.852267 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.852275 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.852282 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.852290 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.852298 1466525 round_trippers.go:580]     Audit-Id: 22e87292-26a8-4770-a8e0-6508567b0b61
	I1225 12:53:34.852311 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.853965 1466525 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1215"},"items":[{"metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82039 chars]
	I1225 12:53:34.856589 1466525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.856683 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mg2zk
	I1225 12:53:34.856691 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.856700 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.856708 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.859097 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:34.859116 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.859123 1466525 round_trippers.go:580]     Audit-Id: 5bd4d7c9-5e5a-42d3-8816-4630ec54b250
	I1225 12:53:34.859130 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.859135 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.859141 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.859146 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.859152 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.859375 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-mg2zk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4f4e21f4-8e73-4b81-a080-c42b6980ee3b","resourceVersion":"864","creationTimestamp":"2023-12-25T12:39:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7dc0088e-bb8c-48d0-bb53-53495f263a29","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dc0088e-bb8c-48d0-bb53-53495f263a29\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1225 12:53:34.859874 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:34.859889 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.859896 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.859902 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.861883 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:53:34.861911 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.861918 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.861923 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.861929 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.861933 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.861939 1466525 round_trippers.go:580]     Audit-Id: 2b9aa002-7b64-44cf-bde6-5aa5ce504fd1
	I1225 12:53:34.861943 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.862301 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:34.862725 1466525 pod_ready.go:92] pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:34.862750 1466525 pod_ready.go:81] duration metric: took 6.134825ms waiting for pod "coredns-5dd5756b68-mg2zk" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.862760 1466525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.862833 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-544936
	I1225 12:53:34.862840 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.862848 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.862854 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.865274 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:34.865298 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.865308 1466525 round_trippers.go:580]     Audit-Id: 10d65bc8-d66b-450a-a9f8-404e64b7542d
	I1225 12:53:34.865317 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.865324 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.865332 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.865340 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.865348 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.865549 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-544936","namespace":"kube-system","uid":"8dc9103e-ec1a-40f4-80f8-4f4918bb5e33","resourceVersion":"884","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.21:2379","kubernetes.io/config.hash":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.mirror":"73040190d29da5f0e049ff80afdcbb96","kubernetes.io/config.seen":"2023-12-25T12:39:31.216603978Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1225 12:53:34.865929 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:34.865941 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.865948 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.865954 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.867928 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:53:34.867945 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.867952 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.867959 1466525 round_trippers.go:580]     Audit-Id: 9d277c96-d78d-4cfb-9aac-7cfc22a38fb3
	I1225 12:53:34.867967 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.867974 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.867981 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.867990 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.868101 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:34.868489 1466525 pod_ready.go:92] pod "etcd-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:34.868507 1466525 pod_ready.go:81] duration metric: took 5.737571ms waiting for pod "etcd-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.868529 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.868613 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-544936
	I1225 12:53:34.868623 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.868637 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.868650 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.870719 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:34.870735 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.870741 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.870747 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.870753 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.870762 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.870770 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.870779 1466525 round_trippers.go:580]     Audit-Id: bdb205f1-0a8e-4cf8-a2f1-589117e19b7b
	I1225 12:53:34.870922 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-544936","namespace":"kube-system","uid":"d0fda9c8-27cf-4ecc-b379-39745cb7ec19","resourceVersion":"874","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.21:8443","kubernetes.io/config.hash":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.mirror":"b7cd9addac4657510db86c61386c4e6f","kubernetes.io/config.seen":"2023-12-25T12:39:31.216607492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1225 12:53:34.871288 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:34.871298 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.871305 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.871311 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.873250 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:53:34.873270 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.873282 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.873291 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.873299 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.873310 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.873322 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.873333 1466525 round_trippers.go:580]     Audit-Id: a817b81a-5225-47e7-8272-faccb959b42c
	I1225 12:53:34.873701 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:34.873979 1466525 pod_ready.go:92] pod "kube-apiserver-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:34.873992 1466525 pod_ready.go:81] duration metric: took 5.452418ms waiting for pod "kube-apiserver-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.874001 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.874052 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-544936
	I1225 12:53:34.874060 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.874067 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.874073 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.875902 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:53:34.875922 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.875930 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.875938 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.875945 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.875954 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.875961 1466525 round_trippers.go:580]     Audit-Id: 632034cd-c6ad-4947-befa-c1d6047dab15
	I1225 12:53:34.875974 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.876226 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-544936","namespace":"kube-system","uid":"e8837ba4-e0a0-4bec-a702-df5e7e9ce1c0","resourceVersion":"858","creationTimestamp":"2023-12-25T12:39:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.mirror":"dcbd1114ea0bb0064cc87c1b2d706f29","kubernetes.io/config.seen":"2023-12-25T12:39:31.216608577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1225 12:53:34.876688 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:34.876703 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:34.876711 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:34.876716 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:34.878595 1466525 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1225 12:53:34.878609 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:34.878616 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:34.878621 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:34.878626 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:34 GMT
	I1225 12:53:34.878631 1466525 round_trippers.go:580]     Audit-Id: b8c363d6-d2b0-4e9f-af47-f8ea4003a2de
	I1225 12:53:34.878638 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:34.878643 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:34.878802 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:34.879065 1466525 pod_ready.go:92] pod "kube-controller-manager-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:34.879078 1466525 pod_ready.go:81] duration metric: took 5.071101ms waiting for pod "kube-controller-manager-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:34.879088 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:35.041554 1466525 request.go:629] Waited for 162.381245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:53:35.041617 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z5x6
	I1225 12:53:35.041622 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:35.041631 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:35.041637 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:35.045922 1466525 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1225 12:53:35.045945 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:35.045955 1466525 round_trippers.go:580]     Audit-Id: 8f886953-0537-48c4-8ee5-d946ac38d055
	I1225 12:53:35.045961 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:35.045968 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:35.045973 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:35.045978 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:35.045984 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:35 GMT
	I1225 12:53:35.054232 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7z5x6","generateName":"kube-proxy-","namespace":"kube-system","uid":"304c848e-4ecf-433d-a17d-b1b33784ae08","resourceVersion":"1046","creationTimestamp":"2023-12-25T12:40:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:40:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1225 12:53:35.241278 1466525 request.go:629] Waited for 186.398694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:53:35.241363 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m02
	I1225 12:53:35.241369 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:35.241382 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:35.241396 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:35.244398 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:35.244428 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:35.244438 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:35.244446 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:35.244453 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:35.244462 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:35.244468 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:35 GMT
	I1225 12:53:35.244475 1466525 round_trippers.go:580]     Audit-Id: dddb14c8-08c0-494b-a7b0-52b2e98ef343
	I1225 12:53:35.244672 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m02","uid":"b32a0af7-ee24-4bb7-b481-19b822376a8d","resourceVersion":"1208","creationTimestamp":"2023-12-25T12:51:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_53_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:51:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1225 12:53:35.245088 1466525 pod_ready.go:92] pod "kube-proxy-7z5x6" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:35.245111 1466525 pod_ready.go:81] duration metric: took 366.016243ms waiting for pod "kube-proxy-7z5x6" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:35.245125 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:35.441160 1466525 request.go:629] Waited for 195.953283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:53:35.441268 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:53:35.441280 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:35.441293 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:35.441306 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:35.447071 1466525 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1225 12:53:35.447104 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:35.447116 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:35.447125 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:35.447133 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:35 GMT
	I1225 12:53:35.447141 1466525 round_trippers.go:580]     Audit-Id: 2d31c18b-f0b1-41f9-8b78-c007161fc95e
	I1225 12:53:35.447151 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:35.447159 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:35.447371 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gkxgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d14fbb1d-1200-463f-bd2b-17943371448c","resourceVersion":"1214","creationTimestamp":"2023-12-25T12:41:20Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1225 12:53:35.641357 1466525 request.go:629] Waited for 193.442972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:35.641444 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:35.641449 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:35.641457 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:35.641463 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:35.644066 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:35.644094 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:35.644102 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:35 GMT
	I1225 12:53:35.644107 1466525 round_trippers.go:580]     Audit-Id: 5e57867f-56e0-4481-b2d7-bc6d827aecdf
	I1225 12:53:35.644113 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:35.644118 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:35.644123 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:35.644128 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:35.644296 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"be4bf590-c76b-44e5-bb48-3057ad728689","resourceVersion":"1209","creationTimestamp":"2023-12-25T12:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_53_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1225 12:53:35.840794 1466525 request.go:629] Waited for 95.204022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:53:35.840858 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkxgw
	I1225 12:53:35.840863 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:35.840872 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:35.840879 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:35.844726 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:35.844757 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:35.844781 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:35.844790 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:35.844801 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:35 GMT
	I1225 12:53:35.844813 1466525 round_trippers.go:580]     Audit-Id: 5d27a56e-68b6-4d63-b0bc-c26e9ae11794
	I1225 12:53:35.844825 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:35.844834 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:35.844956 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gkxgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d14fbb1d-1200-463f-bd2b-17943371448c","resourceVersion":"1225","creationTimestamp":"2023-12-25T12:41:20Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1225 12:53:36.041768 1466525 request.go:629] Waited for 196.262242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:36.041841 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936-m03
	I1225 12:53:36.041846 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:36.041855 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:36.041861 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:36.044842 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:36.044868 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:36.044875 1466525 round_trippers.go:580]     Audit-Id: 6ea4d9b9-c729-4e0d-86ef-fb85fbf6eba0
	I1225 12:53:36.044881 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:36.044886 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:36.044891 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:36.044897 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:36.044902 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:36 GMT
	I1225 12:53:36.045194 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936-m03","uid":"be4bf590-c76b-44e5-bb48-3057ad728689","resourceVersion":"1209","creationTimestamp":"2023-12-25T12:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_25T12_53_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1225 12:53:36.045586 1466525 pod_ready.go:92] pod "kube-proxy-gkxgw" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:36.045606 1466525 pod_ready.go:81] duration metric: took 800.473613ms waiting for pod "kube-proxy-gkxgw" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:36.045617 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:36.241033 1466525 request.go:629] Waited for 195.318499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:53:36.241118 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4jc7
	I1225 12:53:36.241124 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:36.241132 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:36.241141 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:36.244208 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:36.244232 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:36.244240 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:36.244245 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:36 GMT
	I1225 12:53:36.244250 1466525 round_trippers.go:580]     Audit-Id: 8765e5c8-a4d8-4614-b325-960152d2112a
	I1225 12:53:36.244256 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:36.244261 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:36.244266 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:36.244510 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4jc7","generateName":"kube-proxy-","namespace":"kube-system","uid":"14699a0d-601b-4bc3-9584-7ac67822a926","resourceVersion":"790","creationTimestamp":"2023-12-25T12:39:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ba4168f5-7b22-4fd4-84d1-94e16f5645a7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba4168f5-7b22-4fd4-84d1-94e16f5645a7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1225 12:53:36.441497 1466525 request.go:629] Waited for 196.455463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:36.441577 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:36.441585 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:36.441597 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:36.441612 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:36.445122 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:36.445156 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:36.445164 1466525 round_trippers.go:580]     Audit-Id: 6b447c44-c4e7-4155-bb00-0dadb9c87c92
	I1225 12:53:36.445170 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:36.445175 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:36.445184 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:36.445195 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:36.445202 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:36 GMT
	I1225 12:53:36.445318 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:36.445689 1466525 pod_ready.go:92] pod "kube-proxy-k4jc7" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:36.445706 1466525 pod_ready.go:81] duration metric: took 400.082906ms waiting for pod "kube-proxy-k4jc7" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:36.445716 1466525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:36.641766 1466525 request.go:629] Waited for 195.939197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:53:36.641854 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-544936
	I1225 12:53:36.641867 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:36.641880 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:36.641895 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:36.644971 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:36.645002 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:36.645013 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:36 GMT
	I1225 12:53:36.645021 1466525 round_trippers.go:580]     Audit-Id: 70a44891-9199-4217-9ee8-15d9e7400c20
	I1225 12:53:36.645030 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:36.645038 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:36.645046 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:36.645053 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:36.645187 1466525 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-544936","namespace":"kube-system","uid":"e8027489-26d3-44c3-aeea-286e6689e75e","resourceVersion":"876","creationTimestamp":"2023-12-25T12:39:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.mirror":"0d8721061e771e9dc39fa5394fc12b4b","kubernetes.io/config.seen":"2023-12-25T12:39:22.819404471Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-25T12:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1225 12:53:36.840928 1466525 request.go:629] Waited for 195.255202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:36.841007 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes/multinode-544936
	I1225 12:53:36.841012 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:36.841020 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:36.841027 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:36.843886 1466525 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1225 12:53:36.843907 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:36.843915 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:36.843920 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:36.843926 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:36.843931 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:36 GMT
	I1225 12:53:36.843940 1466525 round_trippers.go:580]     Audit-Id: 91c13163-867d-45b6-8dc3-9c3dc7e349d5
	I1225 12:53:36.843948 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:36.844159 1466525 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-25T12:39:27Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1225 12:53:36.844617 1466525 pod_ready.go:92] pod "kube-scheduler-multinode-544936" in "kube-system" namespace has status "Ready":"True"
	I1225 12:53:36.844638 1466525 pod_ready.go:81] duration metric: took 398.914231ms waiting for pod "kube-scheduler-multinode-544936" in "kube-system" namespace to be "Ready" ...
	I1225 12:53:36.844653 1466525 pod_ready.go:38] duration metric: took 2.000133557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 12:53:36.844675 1466525 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 12:53:36.844736 1466525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:53:36.859324 1466525 system_svc.go:56] duration metric: took 14.641359ms WaitForService to wait for kubelet.
	I1225 12:53:36.859359 1466525 kubeadm.go:581] duration metric: took 2.03706498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 12:53:36.859387 1466525 node_conditions.go:102] verifying NodePressure condition ...
	I1225 12:53:37.041815 1466525 request.go:629] Waited for 182.341858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.21:8443/api/v1/nodes
	I1225 12:53:37.041884 1466525 round_trippers.go:463] GET https://192.168.39.21:8443/api/v1/nodes
	I1225 12:53:37.041890 1466525 round_trippers.go:469] Request Headers:
	I1225 12:53:37.041899 1466525 round_trippers.go:473]     Accept: application/json, */*
	I1225 12:53:37.041907 1466525 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1225 12:53:37.045595 1466525 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1225 12:53:37.045626 1466525 round_trippers.go:577] Response Headers:
	I1225 12:53:37.045634 1466525 round_trippers.go:580]     Date: Mon, 25 Dec 2023 12:53:37 GMT
	I1225 12:53:37.045640 1466525 round_trippers.go:580]     Audit-Id: 13a8476e-8c45-4dd6-9877-6f70a7abe2ca
	I1225 12:53:37.045646 1466525 round_trippers.go:580]     Cache-Control: no-cache, private
	I1225 12:53:37.045651 1466525 round_trippers.go:580]     Content-Type: application/json
	I1225 12:53:37.045656 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e7f4fa98-8a85-4316-bcca-da97616e67b8
	I1225 12:53:37.045662 1466525 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a2433a00-d736-42de-bcc9-0a0e4464d1ac
	I1225 12:53:37.046458 1466525 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1230"},"items":[{"metadata":{"name":"multinode-544936","uid":"7b508dbe-08e0-493c-bb18-8a60336e05f8","resourceVersion":"893","creationTimestamp":"2023-12-25T12:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-544936","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f8b637745f32b0b89b0ea392bb3c31ae7b3b68da","minikube.k8s.io/name":"multinode-544936","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_25T12_39_32_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16237 chars]
	I1225 12:53:37.047082 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:53:37.047171 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:53:37.047188 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:53:37.047197 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:53:37.047204 1466525 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 12:53:37.047210 1466525 node_conditions.go:123] node cpu capacity is 2
	I1225 12:53:37.047217 1466525 node_conditions.go:105] duration metric: took 187.824415ms to run NodePressure ...
	I1225 12:53:37.047233 1466525 start.go:228] waiting for startup goroutines ...
	I1225 12:53:37.047260 1466525 start.go:242] writing updated cluster config ...
	I1225 12:53:37.047637 1466525 ssh_runner.go:195] Run: rm -f paused
	I1225 12:53:37.106961 1466525 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 12:53:37.110135 1466525 out.go:177] * Done! kubectl is now configured to use "multinode-544936" cluster and "default" namespace by default
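
	For reference, the capacity values logged above (cpu 2, ephemeral-storage 17784752Ki) come straight from the Node objects returned by the API server during the NodePressure check. Below is a minimal, hypothetical sketch of the same query using client-go; it is not minikube's actual implementation, and the kubeconfig path and program layout are assumptions.

// Hypothetical sketch: list nodes and print the capacities that the
// NodePressure verification above reads from the API server.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig in the default location points at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name -> quantity on the node status.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}

	Run against this cluster, it would print the same cpu=2 and ephemeral-storage=17784752Ki values recorded in the log for each node.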
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 12:49:24 UTC, ends at Mon 2023-12-25 12:53:38 UTC. --
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.291775312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508818291762476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2f70f9cb-cc61-43c6-a2d5-5b0c4afc3727 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.292929151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4490523c-85f0-41f8-a1b9-3415107fc50c name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.293032944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4490523c-85f0-41f8-a1b9-3415107fc50c name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.293473685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17f8f2556105712947a7c3ec92fe61b0ed09550133ea4d2aab35b0c309883647,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703508631410075418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd4839dc9d8b79604c1445c2be2f56b1e9fc4c2555daeaac0324ee17927dda2,PodSandboxId:03f5b4d94ea95b413dcd789218ec043b75d65287530e6868f6454dab41fed3e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508608935833218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54c26f9d4d795aaa8c182e9113c7b657877e25e1e6115658ab80629d8a520e3,PodSandboxId:d0809f35b098a399af16ddca68e4d910285da143ff24ee35291d801bb8092929,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703508607767636484,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575bdfec4d48a68612cad6b75bb20f9bb36c58739a4ae160976a2eff7714ef15,PodSandboxId:56ed8378f45f8e4d72e567734f8c9ef477f8c5fd214d9cb4a10cfab9c0bcb25b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703508603120644857,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4f9e3bb9920be2fead0a328c42865d7676d1acc0d84b727300626005938999,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703508600271947221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ff4f53689aaba298ff7826edc71bb16d092c663bcec90ebe3c67ec4affe94,PodSandboxId:a8807c049e40d89375af9c5a3a906ef8936f02270214aec1a62d62f4cb214bb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703508600173983137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac67822
a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7709b44be66229292dd2c63b6d7a3603e5ff9803db26441ccd8eac757ae4d5,PodSandboxId:670fb82b92294bd6cd275fa15f54071a227cd4925e83f0f8378f0bcf01e53d3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703508593857094120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430830dd54a388cc8c6ba6e63b86d0ae0046bacf9029c8b33718c76f21b974bc,PodSandboxId:7853068eae7cd7ef0cb6239026d219ab6cd1784bcef6939db414505a80862eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703508593530925520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.has
h: 7e47c687,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93ece41c3a6f26870f30e1e9dc0f4d350bfa308a0903b903ec7a0654f1727d0,PodSandboxId:c49e623262974d238c5f1642cba4687f0c4e0a9ee1796da0c6d98f83cfe9bb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703508593217070364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007431211d8d9ef136ae41d34947868f92335766e50519588412580572dd4716,PodSandboxId:370515b9cddd860e98ac8638a332eaebe47104444c22f8fae3227cb08694dfbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703508593150703020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4490523c-85f0-41f8-a1b9-3415107fc50c name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.346555233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c9d75ce5-cc88-441a-b944-ce3b1609aa95 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.346622765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c9d75ce5-cc88-441a-b944-ce3b1609aa95 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.348119190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0a8d371c-06eb-40ca-989b-c9c118fe0616 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.348626893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508818348609311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0a8d371c-06eb-40ca-989b-c9c118fe0616 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.349786677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e236c31a-60be-4c6b-84ba-0ed98e1cb6b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.349866928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e236c31a-60be-4c6b-84ba-0ed98e1cb6b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.350110597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17f8f2556105712947a7c3ec92fe61b0ed09550133ea4d2aab35b0c309883647,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703508631410075418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd4839dc9d8b79604c1445c2be2f56b1e9fc4c2555daeaac0324ee17927dda2,PodSandboxId:03f5b4d94ea95b413dcd789218ec043b75d65287530e6868f6454dab41fed3e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508608935833218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54c26f9d4d795aaa8c182e9113c7b657877e25e1e6115658ab80629d8a520e3,PodSandboxId:d0809f35b098a399af16ddca68e4d910285da143ff24ee35291d801bb8092929,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703508607767636484,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575bdfec4d48a68612cad6b75bb20f9bb36c58739a4ae160976a2eff7714ef15,PodSandboxId:56ed8378f45f8e4d72e567734f8c9ef477f8c5fd214d9cb4a10cfab9c0bcb25b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703508603120644857,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4f9e3bb9920be2fead0a328c42865d7676d1acc0d84b727300626005938999,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703508600271947221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ff4f53689aaba298ff7826edc71bb16d092c663bcec90ebe3c67ec4affe94,PodSandboxId:a8807c049e40d89375af9c5a3a906ef8936f02270214aec1a62d62f4cb214bb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703508600173983137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac67822
a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7709b44be66229292dd2c63b6d7a3603e5ff9803db26441ccd8eac757ae4d5,PodSandboxId:670fb82b92294bd6cd275fa15f54071a227cd4925e83f0f8378f0bcf01e53d3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703508593857094120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430830dd54a388cc8c6ba6e63b86d0ae0046bacf9029c8b33718c76f21b974bc,PodSandboxId:7853068eae7cd7ef0cb6239026d219ab6cd1784bcef6939db414505a80862eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703508593530925520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.has
h: 7e47c687,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93ece41c3a6f26870f30e1e9dc0f4d350bfa308a0903b903ec7a0654f1727d0,PodSandboxId:c49e623262974d238c5f1642cba4687f0c4e0a9ee1796da0c6d98f83cfe9bb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703508593217070364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007431211d8d9ef136ae41d34947868f92335766e50519588412580572dd4716,PodSandboxId:370515b9cddd860e98ac8638a332eaebe47104444c22f8fae3227cb08694dfbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703508593150703020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e236c31a-60be-4c6b-84ba-0ed98e1cb6b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.405787343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3bf43607-589a-4aa1-942c-f41a0ec00285 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.405901564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3bf43607-589a-4aa1-942c-f41a0ec00285 name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.407601107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6cac6536-de12-4ad4-a6a7-10cb275d3034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.408210330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508818408189235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6cac6536-de12-4ad4-a6a7-10cb275d3034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.410092831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b7c082f9-6bec-4d5e-a959-a19d1efd3bb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.410248441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b7c082f9-6bec-4d5e-a959-a19d1efd3bb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.410607678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17f8f2556105712947a7c3ec92fe61b0ed09550133ea4d2aab35b0c309883647,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703508631410075418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd4839dc9d8b79604c1445c2be2f56b1e9fc4c2555daeaac0324ee17927dda2,PodSandboxId:03f5b4d94ea95b413dcd789218ec043b75d65287530e6868f6454dab41fed3e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508608935833218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54c26f9d4d795aaa8c182e9113c7b657877e25e1e6115658ab80629d8a520e3,PodSandboxId:d0809f35b098a399af16ddca68e4d910285da143ff24ee35291d801bb8092929,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703508607767636484,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575bdfec4d48a68612cad6b75bb20f9bb36c58739a4ae160976a2eff7714ef15,PodSandboxId:56ed8378f45f8e4d72e567734f8c9ef477f8c5fd214d9cb4a10cfab9c0bcb25b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703508603120644857,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4f9e3bb9920be2fead0a328c42865d7676d1acc0d84b727300626005938999,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703508600271947221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ff4f53689aaba298ff7826edc71bb16d092c663bcec90ebe3c67ec4affe94,PodSandboxId:a8807c049e40d89375af9c5a3a906ef8936f02270214aec1a62d62f4cb214bb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703508600173983137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac67822
a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7709b44be66229292dd2c63b6d7a3603e5ff9803db26441ccd8eac757ae4d5,PodSandboxId:670fb82b92294bd6cd275fa15f54071a227cd4925e83f0f8378f0bcf01e53d3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703508593857094120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430830dd54a388cc8c6ba6e63b86d0ae0046bacf9029c8b33718c76f21b974bc,PodSandboxId:7853068eae7cd7ef0cb6239026d219ab6cd1784bcef6939db414505a80862eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703508593530925520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.has
h: 7e47c687,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93ece41c3a6f26870f30e1e9dc0f4d350bfa308a0903b903ec7a0654f1727d0,PodSandboxId:c49e623262974d238c5f1642cba4687f0c4e0a9ee1796da0c6d98f83cfe9bb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703508593217070364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007431211d8d9ef136ae41d34947868f92335766e50519588412580572dd4716,PodSandboxId:370515b9cddd860e98ac8638a332eaebe47104444c22f8fae3227cb08694dfbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703508593150703020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b7c082f9-6bec-4d5e-a959-a19d1efd3bb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.457164545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4638ccf0-8ab7-4a18-b966-e33b74e0eb3b name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.457274061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4638ccf0-8ab7-4a18-b966-e33b74e0eb3b name=/runtime.v1.RuntimeService/Version
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.459892301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=29f5fb92-cd3b-4568-89d2-eabe09a75e66 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.460565345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703508818460544905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=29f5fb92-cd3b-4568-89d2-eabe09a75e66 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.461683133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58e7d90e-2b22-4c9a-8177-33c7965760ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.461775691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58e7d90e-2b22-4c9a-8177-33c7965760ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 12:53:38 multinode-544936 crio[714]: time="2023-12-25 12:53:38.462067179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17f8f2556105712947a7c3ec92fe61b0ed09550133ea4d2aab35b0c309883647,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703508631410075418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd4839dc9d8b79604c1445c2be2f56b1e9fc4c2555daeaac0324ee17927dda2,PodSandboxId:03f5b4d94ea95b413dcd789218ec043b75d65287530e6868f6454dab41fed3e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1703508608935833218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-qn48b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91cf6ac2-2bc3-4049-aaed-7863759e58da,},Annotations:map[string]string{io.kubernetes.container.hash: 2c50b09b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54c26f9d4d795aaa8c182e9113c7b657877e25e1e6115658ab80629d8a520e3,PodSandboxId:d0809f35b098a399af16ddca68e4d910285da143ff24ee35291d801bb8092929,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703508607767636484,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mg2zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4e21f4-8e73-4b81-a080-c42b6980ee3b,},Annotations:map[string]string{io.kubernetes.container.hash: dc0843c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575bdfec4d48a68612cad6b75bb20f9bb36c58739a4ae160976a2eff7714ef15,PodSandboxId:56ed8378f45f8e4d72e567734f8c9ef477f8c5fd214d9cb4a10cfab9c0bcb25b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1703508603120644857,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2hjhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8cfe7daa-3fc7-485a-8794-117466297c5a,},Annotations:map[string]string{io.kubernetes.container.hash: 44ff0fe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4f9e3bb9920be2fead0a328c42865d7676d1acc0d84b727300626005938999,PodSandboxId:0f9035fdbca1dd5f9b29af40251b724dc6cc742eef14f68dc8d5b1d2fac0d7e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703508600271947221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 897346ba-f39d-4771-913e-535bff9ca6b7,},Annotations:map[string]string{io.kubernetes.container.hash: 721f4eb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ff4f53689aaba298ff7826edc71bb16d092c663bcec90ebe3c67ec4affe94,PodSandboxId:a8807c049e40d89375af9c5a3a906ef8936f02270214aec1a62d62f4cb214bb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703508600173983137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4jc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14699a0d-601b-4bc3-9584-7ac67822
a926,},Annotations:map[string]string{io.kubernetes.container.hash: c415925e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7709b44be66229292dd2c63b6d7a3603e5ff9803db26441ccd8eac757ae4d5,PodSandboxId:670fb82b92294bd6cd275fa15f54071a227cd4925e83f0f8378f0bcf01e53d3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703508593857094120,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8721061e771e9dc39fa5394fc12b4b,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430830dd54a388cc8c6ba6e63b86d0ae0046bacf9029c8b33718c76f21b974bc,PodSandboxId:7853068eae7cd7ef0cb6239026d219ab6cd1784bcef6939db414505a80862eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703508593530925520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73040190d29da5f0e049ff80afdcbb96,},Annotations:map[string]string{io.kubernetes.container.has
h: 7e47c687,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93ece41c3a6f26870f30e1e9dc0f4d350bfa308a0903b903ec7a0654f1727d0,PodSandboxId:c49e623262974d238c5f1642cba4687f0c4e0a9ee1796da0c6d98f83cfe9bb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703508593217070364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7cd9addac4657510db86c61386c4e6f,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007431211d8d9ef136ae41d34947868f92335766e50519588412580572dd4716,PodSandboxId:370515b9cddd860e98ac8638a332eaebe47104444c22f8fae3227cb08694dfbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703508593150703020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-544936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbd1114ea0bb0064cc87c1b2d706f29,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58e7d90e-2b22-4c9a-8177-33c7965760ec name=/runtime.v1.RuntimeService/ListContainers
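
	The journal above shows a CRI client polling CRI-O's gRPC endpoint (Version, ImageFsInfo, ListContainers) on unix:///var/run/crio/crio.sock, the socket recorded in the node's cri-socket annotation. Below is a hypothetical sketch of the same ListContainers call made directly with the CRI API client; it is not how the kubelet or minikube issues these requests, just an illustration under those assumptions.

// Hypothetical sketch: query CRI-O directly over its gRPC socket and list
// containers, mirroring the requests logged in the journal above.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: running on the node (or over a forwarded socket) with access to crio.sock.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := client.Version(context.TODO(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

	The container IDs, names, and states it prints should correspond to the entries in the ListContainersResponse messages above and to the container status table that follows.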
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17f8f25561057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   0f9035fdbca1d       storage-provisioner
	abd4839dc9d8b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   03f5b4d94ea95       busybox-5bc68d56bd-qn48b
	a54c26f9d4d79       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   d0809f35b098a       coredns-5dd5756b68-mg2zk
	575bdfec4d48a       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   56ed8378f45f8       kindnet-2hjhm
	8c4f9e3bb9920       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   0f9035fdbca1d       storage-provisioner
	069ff4f53689a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   a8807c049e40d       kube-proxy-k4jc7
	cb7709b44be66       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   670fb82b92294       kube-scheduler-multinode-544936
	430830dd54a38       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   7853068eae7cd       etcd-multinode-544936
	d93ece41c3a6f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   c49e623262974       kube-apiserver-multinode-544936
	007431211d8d9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   370515b9cddd8       kube-controller-manager-multinode-544936
	
	
	==> coredns [a54c26f9d4d795aaa8c182e9113c7b657877e25e1e6115658ab80629d8a520e3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51850 - 31650 "HINFO IN 9040924346743955589.3783032132936657118. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012268608s
	
	
	==> describe nodes <==
	Name:               multinode-544936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-544936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=multinode-544936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T12_39_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-544936
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:53:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:50:29 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:50:29 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:50:29 +0000   Mon, 25 Dec 2023 12:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:50:29 +0000   Mon, 25 Dec 2023 12:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    multinode-544936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c871b9a919d4357b32244d5f639b350
	  System UUID:                2c871b9a-919d-4357-b322-44d5f639b350
	  Boot ID:                    6941a24c-49e1-4f20-aff1-3c65f4767c45
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qn48b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-mg2zk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-544936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2hjhm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-544936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-544936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-k4jc7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-544936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-544936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-544936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-544936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-544936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-544936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-544936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-544936 event: Registered Node multinode-544936 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-544936 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node multinode-544936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node multinode-544936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node multinode-544936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-544936 event: Registered Node multinode-544936 in Controller
	
	
	Name:               multinode-544936-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-544936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=multinode-544936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_25T12_53_34_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:51:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-544936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 12:53:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:51:52 +0000   Mon, 25 Dec 2023 12:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:51:52 +0000   Mon, 25 Dec 2023 12:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:51:52 +0000   Mon, 25 Dec 2023 12:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:51:52 +0000   Mon, 25 Dec 2023 12:51:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-544936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 66cce375dc5741e9bc94b73c36c44956
	  System UUID:                66cce375-dc57-41e9-bc94-b73c36c44956
	  Boot ID:                    73b7258c-7460-449c-b732-339b00feed1f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5868m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-mjlfm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-7z5x6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-544936-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-544936-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-544936-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-544936-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m49s                  kubelet     Node multinode-544936-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m11s (x2 over 3m11s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       108s                   kubelet     Node multinode-544936-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-544936-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-544936-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-544936-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet     Node multinode-544936-m02 status is now: NodeReady
	
	
	Name:               multinode-544936-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-544936-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=multinode-544936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_25T12_53_34_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 12:53:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-544936-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 12:53:33 +0000   Mon, 25 Dec 2023 12:53:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 12:53:33 +0000   Mon, 25 Dec 2023 12:53:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 12:53:33 +0000   Mon, 25 Dec 2023 12:53:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 12:53:33 +0000   Mon, 25 Dec 2023 12:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    multinode-544936-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 94719273df9e45979f9501979c593881
	  System UUID:                94719273-df9e-4597-9f95-01979c593881
	  Boot ID:                    00deb8d3-f186-4f4d-81e5-6b6aac0399f3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-c8v59    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-7cr8v               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-gkxgw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-544936-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-544936-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-544936-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             67s                kubelet     Node multinode-544936-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        37s (x2 over 97s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       6s                 kubelet     Node multinode-544936-m03 status is now: NodeNotSchedulable
	  Normal   NodeReady                6s (x2 over 11m)   kubelet     Node multinode-544936-m03 status is now: NodeReady
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-544936-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-544936-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-544936-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070326] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.407989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.602211] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154497] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.512487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.734748] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.095687] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.146887] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.101020] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.224934] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +18.153031] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	
	
	==> etcd [430830dd54a388cc8c6ba6e63b86d0ae0046bacf9029c8b33718c76f21b974bc] <==
	{"level":"info","ts":"2023-12-25T12:49:55.320844Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-25T12:49:55.320855Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-25T12:49:55.321076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 switched to configuration voters=(4335799684680043239)"}
	{"level":"info","ts":"2023-12-25T12:49:55.321165Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","added-peer-id":"3c2bdad7569acae7","added-peer-peer-urls":["https://192.168.39.21:2380"]}
	{"level":"info","ts":"2023-12-25T12:49:55.321417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:49:55.32147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T12:49:55.324741Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-25T12:49:55.324964Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3c2bdad7569acae7","initial-advertise-peer-urls":["https://192.168.39.21:2380"],"listen-peer-urls":["https://192.168.39.21:2380"],"advertise-client-urls":["https://192.168.39.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-25T12:49:55.3251Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-25T12:49:55.325227Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2023-12-25T12:49:55.325253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2023-12-25T12:49:56.900244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-25T12:49:56.900321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-25T12:49:56.900421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgPreVoteResp from 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2023-12-25T12:49:56.900438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became candidate at term 3"}
	{"level":"info","ts":"2023-12-25T12:49:56.900443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgVoteResp from 3c2bdad7569acae7 at term 3"}
	{"level":"info","ts":"2023-12-25T12:49:56.900456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became leader at term 3"}
	{"level":"info","ts":"2023-12-25T12:49:56.90047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3c2bdad7569acae7 elected leader 3c2bdad7569acae7 at term 3"}
	{"level":"info","ts":"2023-12-25T12:49:56.902207Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3c2bdad7569acae7","local-member-attributes":"{Name:multinode-544936 ClientURLs:[https://192.168.39.21:2379]}","request-path":"/0/members/3c2bdad7569acae7/attributes","cluster-id":"f019a0e2d3e7d785","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-25T12:49:56.902296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T12:49:56.902497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T12:49:56.903709Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-25T12:49:56.903726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.21:2379"}
	{"level":"info","ts":"2023-12-25T12:49:56.904478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-25T12:49:56.904527Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:53:38 up 4 min,  0 users,  load average: 0.13, 0.25, 0.12
	Linux multinode-544936 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [575bdfec4d48a68612cad6b75bb20f9bb36c58739a4ae160976a2eff7714ef15] <==
	I1225 12:53:05.006771       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:53:05.006819       1 main.go:227] handling current node
	I1225 12:53:05.006831       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:53:05.006839       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	I1225 12:53:05.006949       1 main.go:223] Handling node with IPs: map[192.168.39.54:{}]
	I1225 12:53:05.006982       1 main.go:250] Node multinode-544936-m03 has CIDR [10.244.3.0/24] 
	I1225 12:53:15.018306       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:53:15.018429       1 main.go:227] handling current node
	I1225 12:53:15.018441       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:53:15.018448       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	I1225 12:53:15.018542       1 main.go:223] Handling node with IPs: map[192.168.39.54:{}]
	I1225 12:53:15.018573       1 main.go:250] Node multinode-544936-m03 has CIDR [10.244.3.0/24] 
	I1225 12:53:25.034117       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:53:25.034189       1 main.go:227] handling current node
	I1225 12:53:25.034242       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:53:25.034251       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	I1225 12:53:25.034546       1 main.go:223] Handling node with IPs: map[192.168.39.54:{}]
	I1225 12:53:25.034662       1 main.go:250] Node multinode-544936-m03 has CIDR [10.244.3.0/24] 
	I1225 12:53:35.053884       1 main.go:223] Handling node with IPs: map[192.168.39.21:{}]
	I1225 12:53:35.053988       1 main.go:227] handling current node
	I1225 12:53:35.054017       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I1225 12:53:35.054035       1 main.go:250] Node multinode-544936-m02 has CIDR [10.244.1.0/24] 
	I1225 12:53:35.054151       1 main.go:223] Handling node with IPs: map[192.168.39.54:{}]
	I1225 12:53:35.054171       1 main.go:250] Node multinode-544936-m03 has CIDR [10.244.2.0/24] 
	I1225 12:53:35.054232       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.54 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [d93ece41c3a6f26870f30e1e9dc0f4d350bfa308a0903b903ec7a0654f1727d0] <==
	I1225 12:49:58.387878       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1225 12:49:58.387887       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1225 12:49:58.387929       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1225 12:49:58.388490       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1225 12:49:58.388528       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1225 12:49:58.472859       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 12:49:58.488710       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1225 12:49:58.488949       1 aggregator.go:166] initial CRD sync complete...
	I1225 12:49:58.488981       1 autoregister_controller.go:141] Starting autoregister controller
	I1225 12:49:58.488988       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 12:49:58.488995       1 cache.go:39] Caches are synced for autoregister controller
	I1225 12:49:58.514515       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1225 12:49:58.520349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 12:49:58.521197       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1225 12:49:58.527469       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1225 12:49:58.527654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 12:49:58.530271       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1225 12:49:58.531185       1 shared_informer.go:318] Caches are synced for configmaps
	I1225 12:49:59.329273       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 12:50:01.500065       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1225 12:50:01.657730       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1225 12:50:01.671672       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1225 12:50:01.745933       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 12:50:01.754793       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 12:50:48.947814       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [007431211d8d9ef136ae41d34947868f92335766e50519588412580572dd4716] <==
	I1225 12:51:51.459020       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m03"
	I1225 12:51:52.191021       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-544936-m02\" does not exist"
	I1225 12:51:52.197129       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m03"
	I1225 12:51:52.192041       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-z5f74" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-z5f74"
	I1225 12:51:52.224543       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-544936-m02" podCIDRs=["10.244.1.0/24"]
	I1225 12:51:52.356728       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:51:53.135582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="120.928µs"
	I1225 12:52:06.351833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.508µs"
	I1225 12:52:06.959734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.36µs"
	I1225 12:52:06.962439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.333µs"
	I1225 12:52:31.787675       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:53:30.062135       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5868m"
	I1225 12:53:30.074684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.712897ms"
	I1225 12:53:30.106689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.903364ms"
	I1225 12:53:30.106824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.042µs"
	I1225 12:53:31.231697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.517601ms"
	I1225 12:53:31.231904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="116.41µs"
	I1225 12:53:32.867919       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:53:33.079285       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:53:33.741189       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:53:33.741632       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-544936-m03\" does not exist"
	I1225 12:53:33.746179       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-c8v59" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-c8v59"
	I1225 12:53:33.757745       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-544936-m03" podCIDRs=["10.244.2.0/24"]
	I1225 12:53:33.888239       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-544936-m02"
	I1225 12:53:34.640434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="139.757µs"
	
	
	==> kube-proxy [069ff4f53689aaba298ff7826edc71bb16d092c663bcec90ebe3c67ec4affe94] <==
	I1225 12:50:00.838006       1 server_others.go:69] "Using iptables proxy"
	I1225 12:50:00.874058       1 node.go:141] Successfully retrieved node IP: 192.168.39.21
	I1225 12:50:01.135204       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 12:50:01.135562       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 12:50:01.140841       1 server_others.go:152] "Using iptables Proxier"
	I1225 12:50:01.140965       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 12:50:01.141627       1 server.go:846] "Version info" version="v1.28.4"
	I1225 12:50:01.141922       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 12:50:01.142835       1 config.go:188] "Starting service config controller"
	I1225 12:50:01.142932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 12:50:01.143069       1 config.go:97] "Starting endpoint slice config controller"
	I1225 12:50:01.143094       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 12:50:01.146042       1 config.go:315] "Starting node config controller"
	I1225 12:50:01.146088       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 12:50:01.243516       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 12:50:01.243634       1 shared_informer.go:318] Caches are synced for service config
	I1225 12:50:01.246783       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cb7709b44be66229292dd2c63b6d7a3603e5ff9803db26441ccd8eac757ae4d5] <==
	I1225 12:49:55.758669       1 serving.go:348] Generated self-signed cert in-memory
	W1225 12:49:58.420840       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 12:49:58.420887       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 12:49:58.420898       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 12:49:58.420904       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 12:49:58.492755       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1225 12:49:58.492821       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 12:49:58.495993       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 12:49:58.496046       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 12:49:58.496615       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1225 12:49:58.496673       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 12:49:58.596889       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 12:49:24 UTC, ends at Mon 2023-12-25 12:53:39 UTC. --
	Dec 25 12:50:00 multinode-544936 kubelet[917]: E1225 12:50:00.788688     917 projected.go:198] Error preparing data for projected volume kube-api-access-brmj5 for pod default/busybox-5bc68d56bd-qn48b: object "default"/"kube-root-ca.crt" not registered
	Dec 25 12:50:00 multinode-544936 kubelet[917]: E1225 12:50:00.788743     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91cf6ac2-2bc3-4049-aaed-7863759e58da-kube-api-access-brmj5 podName:91cf6ac2-2bc3-4049-aaed-7863759e58da nodeName:}" failed. No retries permitted until 2023-12-25 12:50:02.788727847 +0000 UTC m=+10.893466528 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-brmj5" (UniqueName: "kubernetes.io/projected/91cf6ac2-2bc3-4049-aaed-7863759e58da-kube-api-access-brmj5") pod "busybox-5bc68d56bd-qn48b" (UID: "91cf6ac2-2bc3-4049-aaed-7863759e58da") : object "default"/"kube-root-ca.crt" not registered
	Dec 25 12:50:01 multinode-544936 kubelet[917]: E1225 12:50:01.192545     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-mg2zk" podUID="4f4e21f4-8e73-4b81-a080-c42b6980ee3b"
	Dec 25 12:50:01 multinode-544936 kubelet[917]: E1225 12:50:01.192925     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-qn48b" podUID="91cf6ac2-2bc3-4049-aaed-7863759e58da"
	Dec 25 12:50:02 multinode-544936 kubelet[917]: E1225 12:50:02.705846     917 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 25 12:50:02 multinode-544936 kubelet[917]: E1225 12:50:02.705919     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f4e21f4-8e73-4b81-a080-c42b6980ee3b-config-volume podName:4f4e21f4-8e73-4b81-a080-c42b6980ee3b nodeName:}" failed. No retries permitted until 2023-12-25 12:50:06.705902189 +0000 UTC m=+14.810640875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4f4e21f4-8e73-4b81-a080-c42b6980ee3b-config-volume") pod "coredns-5dd5756b68-mg2zk" (UID: "4f4e21f4-8e73-4b81-a080-c42b6980ee3b") : object "kube-system"/"coredns" not registered
	Dec 25 12:50:02 multinode-544936 kubelet[917]: E1225 12:50:02.806668     917 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 25 12:50:02 multinode-544936 kubelet[917]: E1225 12:50:02.806726     917 projected.go:198] Error preparing data for projected volume kube-api-access-brmj5 for pod default/busybox-5bc68d56bd-qn48b: object "default"/"kube-root-ca.crt" not registered
	Dec 25 12:50:02 multinode-544936 kubelet[917]: E1225 12:50:02.806776     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91cf6ac2-2bc3-4049-aaed-7863759e58da-kube-api-access-brmj5 podName:91cf6ac2-2bc3-4049-aaed-7863759e58da nodeName:}" failed. No retries permitted until 2023-12-25 12:50:06.806763268 +0000 UTC m=+14.911501936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-brmj5" (UniqueName: "kubernetes.io/projected/91cf6ac2-2bc3-4049-aaed-7863759e58da-kube-api-access-brmj5") pod "busybox-5bc68d56bd-qn48b" (UID: "91cf6ac2-2bc3-4049-aaed-7863759e58da") : object "default"/"kube-root-ca.crt" not registered
	Dec 25 12:50:03 multinode-544936 kubelet[917]: E1225 12:50:03.193049     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-qn48b" podUID="91cf6ac2-2bc3-4049-aaed-7863759e58da"
	Dec 25 12:50:03 multinode-544936 kubelet[917]: E1225 12:50:03.193137     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-mg2zk" podUID="4f4e21f4-8e73-4b81-a080-c42b6980ee3b"
	Dec 25 12:50:04 multinode-544936 kubelet[917]: I1225 12:50:04.933217     917 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 25 12:50:31 multinode-544936 kubelet[917]: I1225 12:50:31.385815     917 scope.go:117] "RemoveContainer" containerID="8c4f9e3bb9920be2fead0a328c42865d7676d1acc0d84b727300626005938999"
	Dec 25 12:50:52 multinode-544936 kubelet[917]: E1225 12:50:52.214926     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 12:50:52 multinode-544936 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 12:50:52 multinode-544936 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 12:50:52 multinode-544936 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 12:51:52 multinode-544936 kubelet[917]: E1225 12:51:52.218736     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 12:51:52 multinode-544936 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 12:51:52 multinode-544936 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 12:51:52 multinode-544936 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 12:52:52 multinode-544936 kubelet[917]: E1225 12:52:52.214853     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 12:52:52 multinode-544936 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 12:52:52 multinode-544936 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 12:52:52 multinode-544936 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-544936 -n multinode-544936
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-544936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (687.74s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 stop
E1225 12:53:56.706090 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:54:07.347746 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-544936 stop: exit status 82 (2m0.967278473s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-544936"  ...
	* Stopping node "multinode-544936"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-544936 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-544936 status: exit status 3 (18.847876793s)

                                                
                                                
-- stdout --
	multinode-544936
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-544936-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 12:56:01.454843 1468850 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	E1225 12:56:01.454886 1468850 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-544936 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-544936 -n multinode-544936
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-544936 -n multinode-544936: exit status 3 (3.185941226s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 12:56:04.814923 1468942 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	E1225 12:56:04.814949 1468942 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-544936" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.00s)

                                                
                                    
x
+
TestPreload (280.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-888646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1225 13:06:26.362774 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-888646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m19.062445783s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-888646 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-888646 image pull gcr.io/k8s-minikube/busybox: (1.187036478s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-888646
E1225 13:06:59.758330 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-888646: exit status 82 (2m0.953065508s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-888646"  ...
	* Stopping node "test-preload-888646"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-888646 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-12-25 13:08:46.499114468 +0000 UTC m=+3151.117591504
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-888646 -n test-preload-888646
E1225 13:08:56.706931 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-888646 -n test-preload-888646: exit status 3 (18.577275656s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:09:05.070855 1471940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E1225 13:09:05.070880 1471940 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-888646" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-888646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-888646
--- FAIL: TestPreload (280.73s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (172.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3889749311.exe start -p running-upgrade-941659 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1225 13:11:26.362849 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3889749311.exe start -p running-upgrade-941659 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.732343173s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-941659 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-941659 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (35.284622071s)

-- stdout --
	* [running-upgrade-941659] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-941659 in cluster running-upgrade-941659
	* Updating the running kvm2 "running-upgrade-941659" VM ...
	
	

-- /stdout --
** stderr ** 
	I1225 13:13:25.358611 1476830 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:13:25.358858 1476830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:13:25.358866 1476830 out.go:309] Setting ErrFile to fd 2...
	I1225 13:13:25.358871 1476830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:13:25.359109 1476830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:13:25.359738 1476830 out.go:303] Setting JSON to false
	I1225 13:13:25.360722 1476830 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158159,"bootTime":1703351847,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:13:25.360793 1476830 start.go:138] virtualization: kvm guest
	I1225 13:13:25.363094 1476830 out.go:177] * [running-upgrade-941659] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:13:25.364418 1476830 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:13:25.365731 1476830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:13:25.364491 1476830 notify.go:220] Checking for updates...
	I1225 13:13:25.367240 1476830 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:13:25.368553 1476830 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:13:25.369835 1476830 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:13:25.371126 1476830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:13:25.372845 1476830 config.go:182] Loaded profile config "running-upgrade-941659": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1225 13:13:25.372863 1476830 start_flags.go:694] config upgrade: Driver=kvm2
	I1225 13:13:25.372872 1476830 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1225 13:13:25.372951 1476830 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/running-upgrade-941659/config.json ...
	I1225 13:13:25.373507 1476830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:13:25.373615 1476830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:13:25.389257 1476830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I1225 13:13:25.389698 1476830 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:13:25.390316 1476830 main.go:141] libmachine: Using API Version  1
	I1225 13:13:25.390343 1476830 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:13:25.390761 1476830 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:13:25.390992 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:25.392949 1476830 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1225 13:13:25.394288 1476830 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:13:25.394818 1476830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:13:25.394878 1476830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:13:25.411413 1476830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I1225 13:13:25.411957 1476830 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:13:25.412551 1476830 main.go:141] libmachine: Using API Version  1
	I1225 13:13:25.412573 1476830 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:13:25.412985 1476830 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:13:25.413201 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:25.450742 1476830 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:13:25.452098 1476830 start.go:298] selected driver: kvm2
	I1225 13:13:25.452121 1476830 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-941659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.182 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1225 13:13:25.452263 1476830 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:13:25.452968 1476830 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.453058 1476830 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:13:25.469119 1476830 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:13:25.469512 1476830 cni.go:84] Creating CNI manager for ""
	I1225 13:13:25.469534 1476830 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1225 13:13:25.469545 1476830 start_flags.go:323] config:
	{Name:running-upgrade-941659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.182 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1225 13:13:25.469752 1476830 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.471741 1476830 out.go:177] * Starting control plane node running-upgrade-941659 in cluster running-upgrade-941659
	I1225 13:13:25.473139 1476830 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1225 13:13:25.508198 1476830 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1225 13:13:25.508339 1476830 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/running-upgrade-941659/config.json ...
	I1225 13:13:25.508450 1476830 cache.go:107] acquiring lock: {Name:mk6dc908dcb2275a8df4a7f4dec3f9e0c365632b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.508496 1476830 cache.go:107] acquiring lock: {Name:mk2ccd0947adc10d222d382203fbc5126ce6b3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.508567 1476830 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1225 13:13:25.508584 1476830 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 150.05µs
	I1225 13:13:25.508501 1476830 cache.go:107] acquiring lock: {Name:mk2dbbac1ab6e42b84b3f6c34367f3040caef1ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.508992 1476830 cache.go:107] acquiring lock: {Name:mke21e684b56044f97fa65348cb53eebc849181d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.509028 1476830 cache.go:107] acquiring lock: {Name:mk7c93947dc1a57051ff2736e4f3cc5fcb23c2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.509043 1476830 cache.go:107] acquiring lock: {Name:mkaa024757b443c3912ee7326a954eeeda238921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.509118 1476830 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1225 13:13:25.508617 1476830 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1225 13:13:25.509208 1476830 start.go:365] acquiring machines lock for running-upgrade-941659: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:13:25.508456 1476830 cache.go:107] acquiring lock: {Name:mk1c5db74464a895edf7289afa1e96ecc8af8cc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.509253 1476830 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1225 13:13:25.509288 1476830 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:13:25.509301 1476830 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1225 13:13:25.509411 1476830 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1225 13:13:25.509020 1476830 cache.go:107] acquiring lock: {Name:mkb545c57243b207eb07e25cd0bc84b6ebf2fb7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:13:25.509646 1476830 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1225 13:13:25.509831 1476830 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1225 13:13:25.510526 1476830 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1225 13:13:25.510629 1476830 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1225 13:13:25.510809 1476830 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:13:25.510802 1476830 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1225 13:13:25.510684 1476830 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1225 13:13:25.510880 1476830 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1225 13:13:25.510989 1476830 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1225 13:13:25.681507 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:13:25.688701 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1225 13:13:25.691782 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1225 13:13:25.718828 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1225 13:13:25.719267 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1225 13:13:25.765862 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1225 13:13:25.766254 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1225 13:13:25.766284 1476830 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 257.279308ms
	I1225 13:13:25.766300 1476830 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1225 13:13:25.772874 1476830 cache.go:162] opening:  /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1225 13:13:26.130365 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1225 13:13:26.130400 1476830 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 621.397575ms
	I1225 13:13:26.130417 1476830 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1225 13:13:26.464813 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1225 13:13:26.464855 1476830 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 956.359061ms
	I1225 13:13:26.464873 1476830 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1225 13:13:26.839733 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1225 13:13:26.839763 1476830 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.331266066s
	I1225 13:13:26.839775 1476830 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1225 13:13:26.889322 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1225 13:13:26.889360 1476830 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.380481458s
	I1225 13:13:26.889377 1476830 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1225 13:13:27.245666 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1225 13:13:27.245696 1476830 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.736685223s
	I1225 13:13:27.245708 1476830 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1225 13:13:27.550412 1476830 cache.go:157] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1225 13:13:27.550459 1476830 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.042011039s
	I1225 13:13:27.550476 1476830 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1225 13:13:27.550501 1476830 cache.go:87] Successfully saved all images to host disk.
	I1225 13:13:56.571817 1476830 start.go:369] acquired machines lock for "running-upgrade-941659" in 31.062569423s
	I1225 13:13:56.571895 1476830 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:13:56.571902 1476830 fix.go:54] fixHost starting: minikube
	I1225 13:13:56.572208 1476830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:13:56.572241 1476830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:13:56.588332 1476830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I1225 13:13:56.588825 1476830 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:13:56.589359 1476830 main.go:141] libmachine: Using API Version  1
	I1225 13:13:56.589387 1476830 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:13:56.589731 1476830 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:13:56.589931 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:56.590090 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetState
	I1225 13:13:56.594177 1476830 fix.go:102] recreateIfNeeded on running-upgrade-941659: state=Running err=<nil>
	W1225 13:13:56.594227 1476830 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:13:56.596056 1476830 out.go:177] * Updating the running kvm2 "running-upgrade-941659" VM ...
	I1225 13:13:56.597439 1476830 machine.go:88] provisioning docker machine ...
	I1225 13:13:56.597494 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:56.597815 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetMachineName
	I1225 13:13:56.598018 1476830 buildroot.go:166] provisioning hostname "running-upgrade-941659"
	I1225 13:13:56.598043 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetMachineName
	I1225 13:13:56.598209 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:56.604269 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:56.604729 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:56.604755 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:56.606333 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:56.606589 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:56.606784 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:56.606946 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:56.607183 1476830 main.go:141] libmachine: Using SSH client type: native
	I1225 13:13:56.607698 1476830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I1225 13:13:56.607719 1476830 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-941659 && echo "running-upgrade-941659" | sudo tee /etc/hostname
	I1225 13:13:56.740792 1476830 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-941659
	
	I1225 13:13:56.740838 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:57.288598 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.289144 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:57.289196 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.289379 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:57.289587 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:57.289771 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:57.289969 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:57.290266 1476830 main.go:141] libmachine: Using SSH client type: native
	I1225 13:13:57.290762 1476830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I1225 13:13:57.290790 1476830 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-941659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-941659/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-941659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:13:57.408200 1476830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:13:57.408240 1476830 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:13:57.408309 1476830 buildroot.go:174] setting up certificates
	I1225 13:13:57.408328 1476830 provision.go:83] configureAuth start
	I1225 13:13:57.408346 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetMachineName
	I1225 13:13:57.408682 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetIP
	I1225 13:13:57.412021 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.412472 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:57.412503 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.412624 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:57.415300 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.415767 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:57.415799 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.416001 1476830 provision.go:138] copyHostCerts
	I1225 13:13:57.416102 1476830 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:13:57.416118 1476830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:13:57.416208 1476830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:13:57.416349 1476830 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:13:57.416362 1476830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:13:57.416395 1476830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:13:57.416485 1476830 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:13:57.416498 1476830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:13:57.416526 1476830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:13:57.416625 1476830 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-941659 san=[192.168.50.182 192.168.50.182 localhost 127.0.0.1 minikube running-upgrade-941659]
	I1225 13:13:57.693761 1476830 provision.go:172] copyRemoteCerts
	I1225 13:13:57.693831 1476830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:13:57.693858 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:57.697503 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.697931 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:57.697968 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.698205 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:57.698457 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:57.698635 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:57.698835 1476830 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/running-upgrade-941659/id_rsa Username:docker}
	I1225 13:13:57.786118 1476830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:13:57.802979 1476830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:13:57.819573 1476830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:13:57.840542 1476830 provision.go:86] duration metric: configureAuth took 432.191924ms
	I1225 13:13:57.840593 1476830 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:13:57.840874 1476830 config.go:182] Loaded profile config "running-upgrade-941659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1225 13:13:57.841014 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:57.845051 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.845552 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:57.845579 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:57.845822 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:57.846082 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:57.846335 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:57.846544 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:57.846770 1476830 main.go:141] libmachine: Using SSH client type: native
	I1225 13:13:57.847200 1476830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I1225 13:13:57.847229 1476830 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:13:58.415588 1476830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:13:58.415615 1476830 machine.go:91] provisioned docker machine in 1.818139642s
	I1225 13:13:58.415625 1476830 start.go:300] post-start starting for "running-upgrade-941659" (driver="kvm2")
	I1225 13:13:58.415636 1476830 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:13:58.415653 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:58.415966 1476830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:13:58.415999 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:58.418835 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.419202 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:58.419233 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.419406 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:58.419614 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:58.419765 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:58.419897 1476830 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/running-upgrade-941659/id_rsa Username:docker}
	I1225 13:13:58.505608 1476830 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:13:58.511693 1476830 info.go:137] Remote host: Buildroot 2019.02.7
	I1225 13:13:58.511730 1476830 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:13:58.511851 1476830 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:13:58.512094 1476830 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:13:58.512273 1476830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:13:58.519365 1476830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:13:58.536490 1476830 start.go:303] post-start completed in 120.850127ms
	I1225 13:13:58.536518 1476830 fix.go:56] fixHost completed within 1.964616716s
	I1225 13:13:58.536542 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:58.539632 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.540013 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:58.540069 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.540323 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:58.540569 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:58.540751 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:58.540909 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:58.541079 1476830 main.go:141] libmachine: Using SSH client type: native
	I1225 13:13:58.541468 1476830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I1225 13:13:58.541482 1476830 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:13:58.663538 1476830 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510038.659765148
	
	I1225 13:13:58.663562 1476830 fix.go:206] guest clock: 1703510038.659765148
	I1225 13:13:58.663569 1476830 fix.go:219] Guest: 2023-12-25 13:13:58.659765148 +0000 UTC Remote: 2023-12-25 13:13:58.536522162 +0000 UTC m=+33.231810730 (delta=123.242986ms)
	I1225 13:13:58.663588 1476830 fix.go:190] guest clock delta is within tolerance: 123.242986ms
	I1225 13:13:58.663593 1476830 start.go:83] releasing machines lock for "running-upgrade-941659", held for 2.091725414s
	I1225 13:13:58.663619 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:58.663950 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetIP
	I1225 13:13:58.667330 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.667755 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:58.667785 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.667979 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:58.668666 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:58.668883 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .DriverName
	I1225 13:13:58.668998 1476830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:13:58.669048 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:58.669137 1476830 ssh_runner.go:195] Run: cat /version.json
	I1225 13:13:58.669163 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHHostname
	I1225 13:13:58.672666 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.672702 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.673060 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:58.673098 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.673408 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fb:53", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-25 14:11:42 +0000 UTC Type:0 Mac:52:54:00:3a:fb:53 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:running-upgrade-941659 Clientid:01:52:54:00:3a:fb:53}
	I1225 13:13:58.673433 1476830 main.go:141] libmachine: (running-upgrade-941659) DBG | domain running-upgrade-941659 has defined IP address 192.168.50.182 and MAC address 52:54:00:3a:fb:53 in network minikube-net
	I1225 13:13:58.673616 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:58.673807 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHPort
	I1225 13:13:58.673856 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:58.673968 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHKeyPath
	I1225 13:13:58.674151 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:58.674191 1476830 main.go:141] libmachine: (running-upgrade-941659) Calling .GetSSHUsername
	I1225 13:13:58.674337 1476830 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/running-upgrade-941659/id_rsa Username:docker}
	I1225 13:13:58.674384 1476830 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/running-upgrade-941659/id_rsa Username:docker}
	W1225 13:13:58.789587 1476830 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1225 13:13:58.789681 1476830 ssh_runner.go:195] Run: systemctl --version
	I1225 13:13:58.795928 1476830 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:13:58.878356 1476830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:13:58.886550 1476830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:13:58.886635 1476830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:13:58.893166 1476830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 13:13:58.893210 1476830 start.go:475] detecting cgroup driver to use...
	I1225 13:13:58.893293 1476830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:13:58.908162 1476830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:13:58.919353 1476830 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:13:58.919426 1476830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:13:58.940035 1476830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:13:58.951528 1476830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1225 13:13:58.964030 1476830 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1225 13:13:58.964112 1476830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:13:59.112100 1476830 docker.go:219] disabling docker service ...
	I1225 13:13:59.112196 1476830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:14:00.134177 1476830 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.021945525s)
	I1225 13:14:00.134247 1476830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:14:00.151056 1476830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:14:00.312289 1476830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:14:00.535816 1476830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:14:00.547660 1476830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:14:00.560921 1476830 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:14:00.561002 1476830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:14:00.571235 1476830 out.go:177] 
	W1225 13:14:00.572768 1476830 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1225 13:14:00.572793 1476830 out.go:239] * 
	* 
	W1225 13:14:00.573853 1476830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:14:00.575621 1476830 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-941659 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-25 13:14:00.602489018 +0000 UTC m=+3465.220966046
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-941659 -n running-upgrade-941659
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-941659 -n running-upgrade-941659: exit status 4 (288.536072ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1225 13:14:00.849347 1477550 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-941659" does not appear in /home/jenkins/minikube-integration/17847-1442600/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-941659" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-941659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-941659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-941659: (1.51353975s)
--- FAIL: TestRunningBinaryUpgrade (172.11s)

TestStoppedBinaryUpgrade/Upgrade (306.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2254011439.exe start -p stopped-upgrade-176938 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2254011439.exe start -p stopped-upgrade-176938 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m12.3670081s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2254011439.exe -p stopped-upgrade-176938 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2254011439.exe -p stopped-upgrade-176938 stop: (1m32.772912135s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-176938 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1225 13:19:07.348376 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-176938 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m21.305161558s)

-- stdout --
	* [stopped-upgrade-176938] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-176938 in cluster stopped-upgrade-176938
	* Restarting existing kvm2 VM for "stopped-upgrade-176938" ...
	
	

-- /stdout --
** stderr ** 
	I1225 13:19:00.188549 1481343 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:19:00.188834 1481343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:19:00.188845 1481343 out.go:309] Setting ErrFile to fd 2...
	I1225 13:19:00.188849 1481343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:19:00.189015 1481343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:19:00.189653 1481343 out.go:303] Setting JSON to false
	I1225 13:19:00.190772 1481343 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158493,"bootTime":1703351847,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:19:00.190844 1481343 start.go:138] virtualization: kvm guest
	I1225 13:19:00.193070 1481343 out.go:177] * [stopped-upgrade-176938] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:19:00.194606 1481343 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:19:00.194680 1481343 notify.go:220] Checking for updates...
	I1225 13:19:00.195971 1481343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:19:00.197522 1481343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:19:00.198856 1481343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:19:00.200159 1481343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:19:00.201510 1481343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:19:00.203124 1481343 config.go:182] Loaded profile config "stopped-upgrade-176938": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1225 13:19:00.203149 1481343 start_flags.go:694] config upgrade: Driver=kvm2
	I1225 13:19:00.203159 1481343 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1225 13:19:00.203245 1481343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/stopped-upgrade-176938/config.json ...
	I1225 13:19:00.203945 1481343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:19:00.204001 1481343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:19:00.221454 1481343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I1225 13:19:00.221860 1481343 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:19:00.222430 1481343 main.go:141] libmachine: Using API Version  1
	I1225 13:19:00.222481 1481343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:19:00.222897 1481343 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:19:00.223075 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:19:00.225319 1481343 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1225 13:19:00.226547 1481343 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:19:00.226926 1481343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:19:00.226975 1481343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:19:00.243476 1481343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I1225 13:19:00.243948 1481343 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:19:00.244486 1481343 main.go:141] libmachine: Using API Version  1
	I1225 13:19:00.244518 1481343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:19:00.244860 1481343 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:19:00.245050 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:19:00.286914 1481343 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:19:00.288379 1481343 start.go:298] selected driver: kvm2
	I1225 13:19:00.288400 1481343 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-176938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1225 13:19:00.288550 1481343 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:19:00.289436 1481343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.289532 1481343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:19:00.306335 1481343 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:19:00.306739 1481343 cni.go:84] Creating CNI manager for ""
	I1225 13:19:00.306759 1481343 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1225 13:19:00.306770 1481343 start_flags.go:323] config:
	{Name:stopped-upgrade-176938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1225 13:19:00.306937 1481343 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.308965 1481343 out.go:177] * Starting control plane node stopped-upgrade-176938 in cluster stopped-upgrade-176938
	I1225 13:19:00.310466 1481343 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1225 13:19:00.332965 1481343 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1225 13:19:00.333118 1481343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/stopped-upgrade-176938/config.json ...
	I1225 13:19:00.333336 1481343 cache.go:107] acquiring lock: {Name:mk6dc908dcb2275a8df4a7f4dec3f9e0c365632b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333391 1481343 cache.go:107] acquiring lock: {Name:mkb545c57243b207eb07e25cd0bc84b6ebf2fb7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333331 1481343 cache.go:107] acquiring lock: {Name:mk2ccd0947adc10d222d382203fbc5126ce6b3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333424 1481343 cache.go:107] acquiring lock: {Name:mk7c93947dc1a57051ff2736e4f3cc5fcb23c2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333420 1481343 cache.go:107] acquiring lock: {Name:mkaa024757b443c3912ee7326a954eeeda238921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333859 1481343 start.go:365] acquiring machines lock for stopped-upgrade-176938: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:19:00.333877 1481343 cache.go:107] acquiring lock: {Name:mk1c5db74464a895edf7289afa1e96ecc8af8cc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.333927 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1225 13:19:00.333947 1481343 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 565.257µs
	I1225 13:19:00.333983 1481343 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1225 13:19:00.333983 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1225 13:19:00.334005 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1225 13:19:00.334029 1481343 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 620.429µs
	I1225 13:19:00.334043 1481343 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1225 13:19:00.334013 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1225 13:19:00.333377 1481343 cache.go:107] acquiring lock: {Name:mke21e684b56044f97fa65348cb53eebc849181d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.334008 1481343 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 586.141µs
	I1225 13:19:00.334166 1481343 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1225 13:19:00.333978 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1225 13:19:00.334070 1481343 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 786.579µs
	I1225 13:19:00.334199 1481343 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1225 13:19:00.334196 1481343 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 914.002µs
	I1225 13:19:00.334214 1481343 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1225 13:19:00.333337 1481343 cache.go:107] acquiring lock: {Name:mk2dbbac1ab6e42b84b3f6c34367f3040caef1ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:19:00.334258 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1225 13:19:00.334270 1481343 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 970.731µs
	I1225 13:19:00.334287 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1225 13:19:00.334290 1481343 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1225 13:19:00.334258 1481343 cache.go:115] /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1225 13:19:00.334297 1481343 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 923.338µs
	I1225 13:19:00.334302 1481343 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 918.71µs
	I1225 13:19:00.334311 1481343 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1225 13:19:00.334312 1481343 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1225 13:19:00.334320 1481343 cache.go:87] Successfully saved all images to host disk.
	I1225 13:19:40.551547 1481343 start.go:369] acquired machines lock for "stopped-upgrade-176938" in 40.217627048s
	I1225 13:19:40.551613 1481343 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:19:40.551629 1481343 fix.go:54] fixHost starting: minikube
	I1225 13:19:40.552050 1481343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:19:40.552113 1481343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:19:40.569447 1481343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42915
	I1225 13:19:40.569901 1481343 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:19:40.570485 1481343 main.go:141] libmachine: Using API Version  1
	I1225 13:19:40.570524 1481343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:19:40.570916 1481343 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:19:40.571166 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:19:40.571331 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetState
	I1225 13:19:40.573133 1481343 fix.go:102] recreateIfNeeded on stopped-upgrade-176938: state=Stopped err=<nil>
	I1225 13:19:40.573165 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	W1225 13:19:40.573365 1481343 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:19:40.575472 1481343 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-176938" ...
	I1225 13:19:40.576958 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .Start
	I1225 13:19:40.577175 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Ensuring networks are active...
	I1225 13:19:40.578007 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Ensuring network default is active
	I1225 13:19:40.578454 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Ensuring network minikube-net is active
	I1225 13:19:40.578808 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Getting domain xml...
	I1225 13:19:40.579531 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Creating domain...
	I1225 13:19:41.917496 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Waiting to get IP...
	I1225 13:19:41.918554 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:41.919136 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:41.919268 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:41.919124 1481646 retry.go:31] will retry after 232.963873ms: waiting for machine to come up
	I1225 13:19:42.153799 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:42.154416 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:42.154464 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:42.154364 1481646 retry.go:31] will retry after 286.403024ms: waiting for machine to come up
	I1225 13:19:42.442974 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:42.443638 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:42.443702 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:42.443602 1481646 retry.go:31] will retry after 404.215144ms: waiting for machine to come up
	I1225 13:19:42.849305 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:42.849875 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:42.849908 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:42.849832 1481646 retry.go:31] will retry after 499.108422ms: waiting for machine to come up
	I1225 13:19:43.350688 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:43.351266 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:43.351297 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:43.351208 1481646 retry.go:31] will retry after 474.837352ms: waiting for machine to come up
	I1225 13:19:43.828124 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:43.828685 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:43.828712 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:43.828613 1481646 retry.go:31] will retry after 849.712515ms: waiting for machine to come up
	I1225 13:19:44.679495 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:44.680195 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:44.680232 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:44.680122 1481646 retry.go:31] will retry after 1.101187091s: waiting for machine to come up
	I1225 13:19:45.783257 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:45.783871 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:45.783936 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:45.783816 1481646 retry.go:31] will retry after 1.091791023s: waiting for machine to come up
	I1225 13:19:46.877135 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:46.877692 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:46.877732 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:46.877636 1481646 retry.go:31] will retry after 1.534657804s: waiting for machine to come up
	I1225 13:19:48.414618 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:48.415219 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:48.415268 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:48.415157 1481646 retry.go:31] will retry after 2.122984818s: waiting for machine to come up
	I1225 13:19:50.540077 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:50.540631 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:50.540688 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:50.540545 1481646 retry.go:31] will retry after 2.322817741s: waiting for machine to come up
	I1225 13:19:52.865518 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:52.865542 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:52.865558 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:52.865384 1481646 retry.go:31] will retry after 2.920418946s: waiting for machine to come up
	I1225 13:19:55.787190 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:19:55.787818 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:19:55.787855 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:19:55.787756 1481646 retry.go:31] will retry after 4.311674053s: waiting for machine to come up
	I1225 13:20:00.104523 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:00.105111 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:20:00.105136 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:20:00.105066 1481646 retry.go:31] will retry after 4.671552344s: waiting for machine to come up
	I1225 13:20:04.781905 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:04.782470 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:20:04.782506 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:20:04.782386 1481646 retry.go:31] will retry after 4.879061803s: waiting for machine to come up
	I1225 13:20:09.665053 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:09.665546 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | unable to find current IP address of domain stopped-upgrade-176938 in network minikube-net
	I1225 13:20:09.665572 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | I1225 13:20:09.665507 1481646 retry.go:31] will retry after 5.461596618s: waiting for machine to come up
	I1225 13:20:15.131851 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.132412 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Found IP for machine: 192.168.61.5
	I1225 13:20:15.132440 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Reserving static IP address...
	I1225 13:20:15.132459 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has current primary IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.132962 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Reserved static IP address: 192.168.61.5
	I1225 13:20:15.132986 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "stopped-upgrade-176938", mac: "52:54:00:d8:e9:e3", ip: "192.168.61.5"} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.133000 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Waiting for SSH to be available...
	I1225 13:20:15.133037 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-176938", mac: "52:54:00:d8:e9:e3", ip: "192.168.61.5"}
	I1225 13:20:15.133049 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | Getting to WaitForSSH function...
	I1225 13:20:15.135473 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.135848 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.135868 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.136089 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | Using SSH client type: external
	I1225 13:20:15.136136 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa (-rw-------)
	I1225 13:20:15.136165 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:20:15.136173 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | About to run SSH command:
	I1225 13:20:15.136187 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | exit 0
	I1225 13:20:15.262488 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | SSH cmd err, output: <nil>: 
	I1225 13:20:15.262928 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetConfigRaw
	I1225 13:20:15.263597 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetIP
	I1225 13:20:15.266784 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.267217 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.267260 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.267508 1481343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/stopped-upgrade-176938/config.json ...
	I1225 13:20:15.267738 1481343 machine.go:88] provisioning docker machine ...
	I1225 13:20:15.267765 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:15.268002 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetMachineName
	I1225 13:20:15.268170 1481343 buildroot.go:166] provisioning hostname "stopped-upgrade-176938"
	I1225 13:20:15.268186 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetMachineName
	I1225 13:20:15.268386 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:15.271029 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.271428 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.271463 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.271658 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:15.271862 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.272018 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.272198 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:15.272367 1481343 main.go:141] libmachine: Using SSH client type: native
	I1225 13:20:15.272765 1481343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I1225 13:20:15.272786 1481343 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-176938 && echo "stopped-upgrade-176938" | sudo tee /etc/hostname
	I1225 13:20:15.393709 1481343 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-176938
	
	I1225 13:20:15.393742 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:15.396942 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.397334 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.397364 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.397624 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:15.397882 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.398071 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.398274 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:15.398490 1481343 main.go:141] libmachine: Using SSH client type: native
	I1225 13:20:15.398851 1481343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I1225 13:20:15.398873 1481343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-176938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-176938/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-176938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:20:15.514951 1481343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:20:15.514990 1481343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:20:15.515013 1481343 buildroot.go:174] setting up certificates
	I1225 13:20:15.515025 1481343 provision.go:83] configureAuth start
	I1225 13:20:15.515034 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetMachineName
	I1225 13:20:15.515347 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetIP
	I1225 13:20:15.518548 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.519067 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.519103 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.519331 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:15.521822 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.522245 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.522271 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.522503 1481343 provision.go:138] copyHostCerts
	I1225 13:20:15.522586 1481343 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:20:15.522606 1481343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:20:15.522693 1481343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:20:15.522813 1481343 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:20:15.522825 1481343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:20:15.522864 1481343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:20:15.522963 1481343 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:20:15.522974 1481343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:20:15.523009 1481343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:20:15.523086 1481343 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-176938 san=[192.168.61.5 192.168.61.5 localhost 127.0.0.1 minikube stopped-upgrade-176938]
	I1225 13:20:15.706027 1481343 provision.go:172] copyRemoteCerts
	I1225 13:20:15.706090 1481343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:20:15.706117 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:15.709286 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.709701 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.709734 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.709961 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:15.710155 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.710366 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:15.710527 1481343 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa Username:docker}
	I1225 13:20:15.793486 1481343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:20:15.807748 1481343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:20:15.821521 1481343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:20:15.835904 1481343 provision.go:86] duration metric: configureAuth took 320.861314ms
	I1225 13:20:15.835953 1481343 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:20:15.836113 1481343 config.go:182] Loaded profile config "stopped-upgrade-176938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1225 13:20:15.836203 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:15.839247 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.839669 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:15.839726 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:15.839880 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:15.840136 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.840333 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:15.840477 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:15.840613 1481343 main.go:141] libmachine: Using SSH client type: native
	I1225 13:20:15.840939 1481343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I1225 13:20:15.840961 1481343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:20:20.471451 1481343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:20:20.471479 1481343 machine.go:91] provisioned docker machine in 5.203727613s
	I1225 13:20:20.471502 1481343 start.go:300] post-start starting for "stopped-upgrade-176938" (driver="kvm2")
	I1225 13:20:20.471513 1481343 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:20:20.471546 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:20.471911 1481343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:20:20.471952 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:20.475395 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.475857 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:20.475878 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.476032 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:20.476300 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:20.476614 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:20.476869 1481343 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa Username:docker}
	I1225 13:20:20.557329 1481343 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:20:20.561338 1481343 info.go:137] Remote host: Buildroot 2019.02.7
	I1225 13:20:20.561363 1481343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:20:20.561421 1481343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:20:20.561487 1481343 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:20:20.561568 1481343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:20:20.566922 1481343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:20:20.581042 1481343 start.go:303] post-start completed in 109.520722ms
	I1225 13:20:20.581082 1481343 fix.go:56] fixHost completed within 40.029459959s
	I1225 13:20:20.581112 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:20.583915 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.584319 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:20.584377 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.584438 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:20.584660 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:20.584825 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:20.584990 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:20.585120 1481343 main.go:141] libmachine: Using SSH client type: native
	I1225 13:20:20.585510 1481343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I1225 13:20:20.585526 1481343 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:20:20.695133 1481343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510420.643787188
	
	I1225 13:20:20.695162 1481343 fix.go:206] guest clock: 1703510420.643787188
	I1225 13:20:20.695173 1481343 fix.go:219] Guest: 2023-12-25 13:20:20.643787188 +0000 UTC Remote: 2023-12-25 13:20:20.581087327 +0000 UTC m=+80.447110717 (delta=62.699861ms)
	I1225 13:20:20.695203 1481343 fix.go:190] guest clock delta is within tolerance: 62.699861ms
	I1225 13:20:20.695210 1481343 start.go:83] releasing machines lock for "stopped-upgrade-176938", held for 40.14362555s
	I1225 13:20:20.695247 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:20.695589 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetIP
	I1225 13:20:20.698897 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.699385 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:20.699416 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.699633 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:20.700305 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:20.700528 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .DriverName
	I1225 13:20:20.700638 1481343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:20:20.700701 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:20.700794 1481343 ssh_runner.go:195] Run: cat /version.json
	I1225 13:20:20.700812 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHHostname
	I1225 13:20:20.703492 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.703697 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.703972 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:20.703999 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.704141 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:20.704272 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e9:e3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-12-25 14:20:06 +0000 UTC Type:0 Mac:52:54:00:d8:e9:e3 Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:stopped-upgrade-176938 Clientid:01:52:54:00:d8:e9:e3}
	I1225 13:20:20.704294 1481343 main.go:141] libmachine: (stopped-upgrade-176938) DBG | domain stopped-upgrade-176938 has defined IP address 192.168.61.5 and MAC address 52:54:00:d8:e9:e3 in network minikube-net
	I1225 13:20:20.704331 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:20.704457 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHPort
	I1225 13:20:20.704522 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:20.704640 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHKeyPath
	I1225 13:20:20.704829 1481343 main.go:141] libmachine: (stopped-upgrade-176938) Calling .GetSSHUsername
	I1225 13:20:20.704822 1481343 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa Username:docker}
	I1225 13:20:20.705001 1481343 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/stopped-upgrade-176938/id_rsa Username:docker}
	W1225 13:20:20.817804 1481343 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1225 13:20:20.817926 1481343 ssh_runner.go:195] Run: systemctl --version
	I1225 13:20:20.827155 1481343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:20:21.028226 1481343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:20:21.036978 1481343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:20:21.037079 1481343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:20:21.043231 1481343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 13:20:21.043263 1481343 start.go:475] detecting cgroup driver to use...
	I1225 13:20:21.043328 1481343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:20:21.055760 1481343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:20:21.066202 1481343 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:20:21.066277 1481343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:20:21.076692 1481343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:20:21.087094 1481343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1225 13:20:21.096834 1481343 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1225 13:20:21.096898 1481343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:20:21.195009 1481343 docker.go:219] disabling docker service ...
	I1225 13:20:21.195107 1481343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:20:21.208257 1481343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:20:21.217785 1481343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:20:21.298307 1481343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:20:21.390731 1481343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:20:21.400944 1481343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:20:21.413656 1481343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:20:21.413726 1481343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:20:21.423023 1481343 out.go:177] 
	W1225 13:20:21.424555 1481343 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1225 13:20:21.424574 1481343 out.go:239] * 
	* 
	W1225 13:20:21.425572 1481343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:20:21.427888 1481343 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-176938 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (306.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (140.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-198979 --alsologtostderr -v=3
E1225 13:18:50.398840 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:18:56.706109 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-198979 --alsologtostderr -v=3: exit status 82 (2m2.202093479s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-198979"  ...
	* Stopping node "old-k8s-version-198979"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:18:22.314815 1481173 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:18:22.315102 1481173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:18:22.315149 1481173 out.go:309] Setting ErrFile to fd 2...
	I1225 13:18:22.315168 1481173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:18:22.315599 1481173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:18:22.316468 1481173 out.go:303] Setting JSON to false
	I1225 13:18:22.316646 1481173 mustload.go:65] Loading cluster: old-k8s-version-198979
	I1225 13:18:22.317027 1481173 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:18:22.317185 1481173 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:18:22.317470 1481173 mustload.go:65] Loading cluster: old-k8s-version-198979
	I1225 13:18:22.317667 1481173 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:18:22.317746 1481173 stop.go:39] StopHost: old-k8s-version-198979
	I1225 13:18:22.318500 1481173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:18:22.318583 1481173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:18:22.335472 1481173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1225 13:18:22.336089 1481173 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:18:22.336869 1481173 main.go:141] libmachine: Using API Version  1
	I1225 13:18:22.336899 1481173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:18:22.337289 1481173 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:18:22.339963 1481173 out.go:177] * Stopping node "old-k8s-version-198979"  ...
	I1225 13:18:22.341756 1481173 main.go:141] libmachine: Stopping "old-k8s-version-198979"...
	I1225 13:18:22.341796 1481173 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:18:22.343659 1481173 main.go:141] libmachine: (old-k8s-version-198979) Calling .Stop
	I1225 13:18:22.347236 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 0/60
	I1225 13:18:23.349156 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 1/60
	I1225 13:18:24.350652 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 2/60
	I1225 13:18:25.353087 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 3/60
	I1225 13:18:26.354543 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 4/60
	I1225 13:18:27.356538 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 5/60
	I1225 13:18:28.358053 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 6/60
	I1225 13:18:29.359652 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 7/60
	I1225 13:18:30.361036 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 8/60
	I1225 13:18:31.362354 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 9/60
	I1225 13:18:32.364080 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 10/60
	I1225 13:18:33.365809 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 11/60
	I1225 13:18:34.367350 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 12/60
	I1225 13:18:35.369159 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 13/60
	I1225 13:18:36.371184 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 14/60
	I1225 13:18:37.372690 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 15/60
	I1225 13:18:38.374987 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 16/60
	I1225 13:18:39.376862 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 17/60
	I1225 13:18:40.378664 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 18/60
	I1225 13:18:41.380214 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 19/60
	I1225 13:18:42.382709 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 20/60
	I1225 13:18:43.385008 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 21/60
	I1225 13:18:44.386524 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 22/60
	I1225 13:18:45.389162 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 23/60
	I1225 13:18:46.390766 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 24/60
	I1225 13:18:47.393104 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 25/60
	I1225 13:18:48.394485 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 26/60
	I1225 13:18:49.396132 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 27/60
	I1225 13:18:50.397583 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 28/60
	I1225 13:18:51.399203 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 29/60
	I1225 13:18:52.401456 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 30/60
	I1225 13:18:53.403302 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 31/60
	I1225 13:18:54.405050 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 32/60
	I1225 13:18:55.406607 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 33/60
	I1225 13:18:56.408420 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 34/60
	I1225 13:18:57.410247 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 35/60
	I1225 13:18:58.411698 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 36/60
	I1225 13:18:59.414717 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 37/60
	I1225 13:19:00.417426 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 38/60
	I1225 13:19:01.419014 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 39/60
	I1225 13:19:02.421166 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 40/60
	I1225 13:19:03.423075 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 41/60
	I1225 13:19:04.425443 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 42/60
	I1225 13:19:05.427225 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 43/60
	I1225 13:19:06.429129 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 44/60
	I1225 13:19:07.431296 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 45/60
	I1225 13:19:08.433277 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 46/60
	I1225 13:19:09.434787 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 47/60
	I1225 13:19:10.437431 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 48/60
	I1225 13:19:11.438994 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 49/60
	I1225 13:19:12.441251 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 50/60
	I1225 13:19:13.442859 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 51/60
	I1225 13:19:14.445249 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 52/60
	I1225 13:19:15.447638 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 53/60
	I1225 13:19:16.449219 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 54/60
	I1225 13:19:17.451102 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 55/60
	I1225 13:19:18.452667 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 56/60
	I1225 13:19:19.454523 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 57/60
	I1225 13:19:20.456380 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 58/60
	I1225 13:19:21.457931 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 59/60
	I1225 13:19:22.459228 1481173 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:19:22.459300 1481173 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:19:22.459321 1481173 retry.go:31] will retry after 912.59727ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:19:23.372363 1481173 stop.go:39] StopHost: old-k8s-version-198979
	I1225 13:19:23.372722 1481173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:19:23.372768 1481173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:19:23.388633 1481173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I1225 13:19:23.389105 1481173 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:19:23.389631 1481173 main.go:141] libmachine: Using API Version  1
	I1225 13:19:23.389661 1481173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:19:23.390125 1481173 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:19:23.391869 1481173 out.go:177] * Stopping node "old-k8s-version-198979"  ...
	I1225 13:19:23.393372 1481173 main.go:141] libmachine: Stopping "old-k8s-version-198979"...
	I1225 13:19:23.393393 1481173 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:19:23.395497 1481173 main.go:141] libmachine: (old-k8s-version-198979) Calling .Stop
	I1225 13:19:23.399697 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 0/60
	I1225 13:19:24.401788 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 1/60
	I1225 13:19:25.403515 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 2/60
	I1225 13:19:26.405097 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 3/60
	I1225 13:19:27.406772 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 4/60
	I1225 13:19:28.408749 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 5/60
	I1225 13:19:29.410328 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 6/60
	I1225 13:19:30.411947 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 7/60
	I1225 13:19:31.413783 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 8/60
	I1225 13:19:32.415152 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 9/60
	I1225 13:19:33.417369 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 10/60
	I1225 13:19:34.419039 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 11/60
	I1225 13:19:35.421080 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 12/60
	I1225 13:19:36.422642 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 13/60
	I1225 13:19:37.424047 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 14/60
	I1225 13:19:38.425509 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 15/60
	I1225 13:19:39.427513 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 16/60
	I1225 13:19:40.429419 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 17/60
	I1225 13:19:41.431500 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 18/60
	I1225 13:19:42.433337 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 19/60
	I1225 13:19:43.435169 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 20/60
	I1225 13:19:44.436724 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 21/60
	I1225 13:19:45.438772 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 22/60
	I1225 13:19:46.441123 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 23/60
	I1225 13:19:47.442692 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 24/60
	I1225 13:19:48.444602 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 25/60
	I1225 13:19:49.446094 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 26/60
	I1225 13:19:50.447978 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 27/60
	I1225 13:19:51.449852 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 28/60
	I1225 13:19:52.451376 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 29/60
	I1225 13:19:53.453313 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 30/60
	I1225 13:19:54.455119 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 31/60
	I1225 13:19:55.457038 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 32/60
	I1225 13:19:56.458768 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 33/60
	I1225 13:19:57.460348 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 34/60
	I1225 13:19:58.462706 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 35/60
	I1225 13:19:59.464350 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 36/60
	I1225 13:20:00.466351 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 37/60
	I1225 13:20:01.468052 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 38/60
	I1225 13:20:02.469542 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 39/60
	I1225 13:20:03.471565 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 40/60
	I1225 13:20:04.473305 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 41/60
	I1225 13:20:05.474830 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 42/60
	I1225 13:20:06.477083 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 43/60
	I1225 13:20:07.478708 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 44/60
	I1225 13:20:08.480481 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 45/60
	I1225 13:20:09.482083 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 46/60
	I1225 13:20:10.483589 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 47/60
	I1225 13:20:11.485187 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 48/60
	I1225 13:20:12.486864 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 49/60
	I1225 13:20:13.488458 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 50/60
	I1225 13:20:14.490266 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 51/60
	I1225 13:20:15.492116 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 52/60
	I1225 13:20:16.493974 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 53/60
	I1225 13:20:17.495703 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 54/60
	I1225 13:20:18.497978 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 55/60
	I1225 13:20:19.499968 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 56/60
	I1225 13:20:20.501387 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 57/60
	I1225 13:20:21.503364 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 58/60
	I1225 13:20:23.415611 1481173 main.go:141] libmachine: (old-k8s-version-198979) Waiting for machine to stop 59/60
	I1225 13:20:24.416247 1481173 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:20:24.416301 1481173 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:20:24.418387 1481173 out.go:177] 
	W1225 13:20:24.419961 1481173 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1225 13:20:24.419986 1481173 out.go:239] * 
	* 
	W1225 13:20:24.432555 1481173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:20:24.434105 1481173 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-198979 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979: exit status 3 (18.490545379s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:20:42.926783 1482281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E1225 13:20:42.926803 1482281 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-198979" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-330063 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-330063 --alsologtostderr -v=3: exit status 82 (2m2.40884894s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-330063"  ...
	* Stopping node "no-preload-330063"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:19:31.732585 1481580 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:19:31.732882 1481580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:19:31.732892 1481580 out.go:309] Setting ErrFile to fd 2...
	I1225 13:19:31.732897 1481580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:19:31.733099 1481580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:19:31.733398 1481580 out.go:303] Setting JSON to false
	I1225 13:19:31.733483 1481580 mustload.go:65] Loading cluster: no-preload-330063
	I1225 13:19:31.733818 1481580 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:19:31.733889 1481580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:19:31.734061 1481580 mustload.go:65] Loading cluster: no-preload-330063
	I1225 13:19:31.734169 1481580 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:19:31.734195 1481580 stop.go:39] StopHost: no-preload-330063
	I1225 13:19:31.734584 1481580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:19:31.734650 1481580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:19:31.749876 1481580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1225 13:19:31.750430 1481580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:19:31.751088 1481580 main.go:141] libmachine: Using API Version  1
	I1225 13:19:31.751117 1481580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:19:31.751534 1481580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:19:31.755260 1481580 out.go:177] * Stopping node "no-preload-330063"  ...
	I1225 13:19:31.756934 1481580 main.go:141] libmachine: Stopping "no-preload-330063"...
	I1225 13:19:31.756958 1481580 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:19:31.759047 1481580 main.go:141] libmachine: (no-preload-330063) Calling .Stop
	I1225 13:19:31.763161 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 0/60
	I1225 13:19:32.764715 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 1/60
	I1225 13:19:33.765980 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 2/60
	I1225 13:19:34.767547 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 3/60
	I1225 13:19:35.769110 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 4/60
	I1225 13:19:36.771453 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 5/60
	I1225 13:19:37.773101 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 6/60
	I1225 13:19:38.774643 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 7/60
	I1225 13:19:39.776163 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 8/60
	I1225 13:19:40.777742 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 9/60
	I1225 13:19:41.779660 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 10/60
	I1225 13:19:42.781345 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 11/60
	I1225 13:19:43.782902 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 12/60
	I1225 13:19:44.784867 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 13/60
	I1225 13:19:45.786274 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 14/60
	I1225 13:19:46.788615 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 15/60
	I1225 13:19:47.790511 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 16/60
	I1225 13:19:48.792044 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 17/60
	I1225 13:19:49.793848 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 18/60
	I1225 13:19:50.795397 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 19/60
	I1225 13:19:51.797822 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 20/60
	I1225 13:19:52.800330 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 21/60
	I1225 13:19:53.802185 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 22/60
	I1225 13:19:54.808506 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 23/60
	I1225 13:19:55.810501 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 24/60
	I1225 13:19:56.812087 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 25/60
	I1225 13:19:57.813550 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 26/60
	I1225 13:19:58.815506 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 27/60
	I1225 13:19:59.817032 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 28/60
	I1225 13:20:00.818487 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 29/60
	I1225 13:20:01.820725 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 30/60
	I1225 13:20:02.822153 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 31/60
	I1225 13:20:03.823900 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 32/60
	I1225 13:20:04.825442 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 33/60
	I1225 13:20:05.827058 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 34/60
	I1225 13:20:06.828959 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 35/60
	I1225 13:20:07.830476 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 36/60
	I1225 13:20:08.831834 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 37/60
	I1225 13:20:09.833576 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 38/60
	I1225 13:20:10.834988 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 39/60
	I1225 13:20:11.837630 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 40/60
	I1225 13:20:12.839466 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 41/60
	I1225 13:20:13.841091 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 42/60
	I1225 13:20:14.842837 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 43/60
	I1225 13:20:15.844855 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 44/60
	I1225 13:20:16.847248 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 45/60
	I1225 13:20:17.848929 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 46/60
	I1225 13:20:18.850501 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 47/60
	I1225 13:20:19.852335 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 48/60
	I1225 13:20:20.853986 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 49/60
	I1225 13:20:21.856373 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 50/60
	I1225 13:20:23.415737 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 51/60
	I1225 13:20:24.417323 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 52/60
	I1225 13:20:25.419000 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 53/60
	I1225 13:20:26.420677 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 54/60
	I1225 13:20:27.423169 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 55/60
	I1225 13:20:28.425099 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 56/60
	I1225 13:20:29.426701 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 57/60
	I1225 13:20:30.428338 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 58/60
	I1225 13:20:31.430261 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 59/60
	I1225 13:20:32.430935 1481580 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:20:32.431005 1481580 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:20:32.431028 1481580 retry.go:31] will retry after 1.478857834s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:20:33.910701 1481580 stop.go:39] StopHost: no-preload-330063
	I1225 13:20:33.911216 1481580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:20:33.911298 1481580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:20:33.927067 1481580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I1225 13:20:33.927561 1481580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:20:33.928179 1481580 main.go:141] libmachine: Using API Version  1
	I1225 13:20:33.928218 1481580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:20:33.928649 1481580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:20:33.931041 1481580 out.go:177] * Stopping node "no-preload-330063"  ...
	I1225 13:20:33.932803 1481580 main.go:141] libmachine: Stopping "no-preload-330063"...
	I1225 13:20:33.932837 1481580 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:20:33.934875 1481580 main.go:141] libmachine: (no-preload-330063) Calling .Stop
	I1225 13:20:33.938672 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 0/60
	I1225 13:20:34.941080 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 1/60
	I1225 13:20:35.942707 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 2/60
	I1225 13:20:36.944330 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 3/60
	I1225 13:20:37.945694 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 4/60
	I1225 13:20:38.947807 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 5/60
	I1225 13:20:39.949319 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 6/60
	I1225 13:20:40.950884 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 7/60
	I1225 13:20:41.952318 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 8/60
	I1225 13:20:42.953600 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 9/60
	I1225 13:20:43.955920 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 10/60
	I1225 13:20:44.957542 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 11/60
	I1225 13:20:45.959043 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 12/60
	I1225 13:20:46.966296 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 13/60
	I1225 13:20:47.968648 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 14/60
	I1225 13:20:48.970571 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 15/60
	I1225 13:20:49.972305 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 16/60
	I1225 13:20:50.974204 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 17/60
	I1225 13:20:51.975663 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 18/60
	I1225 13:20:52.977341 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 19/60
	I1225 13:20:53.979913 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 20/60
	I1225 13:20:54.981764 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 21/60
	I1225 13:20:55.983353 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 22/60
	I1225 13:20:56.985012 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 23/60
	I1225 13:20:57.987458 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 24/60
	I1225 13:20:58.989725 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 25/60
	I1225 13:20:59.991957 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 26/60
	I1225 13:21:00.994127 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 27/60
	I1225 13:21:01.995601 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 28/60
	I1225 13:21:02.997384 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 29/60
	I1225 13:21:03.999171 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 30/60
	I1225 13:21:05.001267 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 31/60
	I1225 13:21:06.002813 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 32/60
	I1225 13:21:07.004615 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 33/60
	I1225 13:21:08.006126 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 34/60
	I1225 13:21:09.008286 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 35/60
	I1225 13:21:10.009815 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 36/60
	I1225 13:21:11.012584 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 37/60
	I1225 13:21:12.014245 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 38/60
	I1225 13:21:13.015867 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 39/60
	I1225 13:21:14.017505 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 40/60
	I1225 13:21:15.019287 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 41/60
	I1225 13:21:16.021111 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 42/60
	I1225 13:21:17.024067 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 43/60
	I1225 13:21:18.025668 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 44/60
	I1225 13:21:19.028293 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 45/60
	I1225 13:21:20.029820 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 46/60
	I1225 13:21:21.031483 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 47/60
	I1225 13:21:22.033071 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 48/60
	I1225 13:21:23.034864 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 49/60
	I1225 13:21:24.036869 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 50/60
	I1225 13:21:25.038850 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 51/60
	I1225 13:21:26.041031 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 52/60
	I1225 13:21:27.043068 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 53/60
	I1225 13:21:28.045096 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 54/60
	I1225 13:21:29.047303 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 55/60
	I1225 13:21:30.049118 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 56/60
	I1225 13:21:31.050901 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 57/60
	I1225 13:21:32.053427 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 58/60
	I1225 13:21:33.054902 1481580 main.go:141] libmachine: (no-preload-330063) Waiting for machine to stop 59/60
	I1225 13:21:34.055944 1481580 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:21:34.055992 1481580 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:21:34.058204 1481580 out.go:177] 
	W1225 13:21:34.059585 1481580 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1225 13:21:34.059611 1481580 out.go:239] * 
	* 
	W1225 13:21:34.072314 1481580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:21:34.074035 1481580 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-330063 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063: exit status 3 (18.482140017s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:21:52.558894 1482886 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host
	E1225 13:21:52.558917 1482886 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-330063" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979: exit status 3 (3.203949936s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:20:46.130746 1482390 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E1225 13:20:46.130773 1482390 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-198979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-198979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.160754811s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-198979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979: exit status 3 (3.050472763s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:20:55.342966 1482588 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E1225 13:20:55.342991 1482588 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-198979" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063: exit status 3 (3.200025839s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:21:55.758911 1482990 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host
	E1225 13:21:55.758941 1482990 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-330063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-330063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.161889021s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-330063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063: exit status 3 (3.054622214s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:22:04.974893 1483060 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host
	E1225 13:22:04.974940 1483060 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-330063" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
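The addon step here fails before it touches the addon: per the error chain above, `addons enable dashboard` first checks whether the cluster is paused (check paused → list paused → crictl list), and that check needs an SSH session to the node, which cannot be dialed (192.168.72.232:22, no route to host), so the command exits with status 11 as MK_ADDON_ENABLE_PAUSED. The same chain explains the embed-certs and default-k8s-diff-port EnableAddonAfterStop failures later in this report. A generic sketch of the reachability precondition, assuming only the node IP from the log (this is not minikube's API):

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeReachable checks that the node's SSH port accepts a TCP connection,
// which is what the failing "check paused" step needs before it can run crictl.
func nodeReachable(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return fmt.Errorf("check paused precondition: %w", err)
	}
	return conn.Close()
}

func main() {
	if err := nodeReachable("192.168.72.232:22"); err != nil {
		// This is where `addons enable dashboard` gives up in the run above.
		fmt.Println(err)
		return
	}
	fmt.Println("node SSH port reachable; the paused check could proceed")
}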

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-880612 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-880612 --alsologtostderr -v=3: exit status 82 (2m1.202878679s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-880612"  ...
	* Stopping node "embed-certs-880612"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:22:14.359318 1483224 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:22:14.359595 1483224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:22:14.359605 1483224 out.go:309] Setting ErrFile to fd 2...
	I1225 13:22:14.359609 1483224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:22:14.359812 1483224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:22:14.360064 1483224 out.go:303] Setting JSON to false
	I1225 13:22:14.360148 1483224 mustload.go:65] Loading cluster: embed-certs-880612
	I1225 13:22:14.360524 1483224 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:22:14.360594 1483224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:22:14.360769 1483224 mustload.go:65] Loading cluster: embed-certs-880612
	I1225 13:22:14.360880 1483224 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:22:14.360908 1483224 stop.go:39] StopHost: embed-certs-880612
	I1225 13:22:14.361312 1483224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:22:14.361385 1483224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:22:14.376018 1483224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46291
	I1225 13:22:14.376600 1483224 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:22:14.377254 1483224 main.go:141] libmachine: Using API Version  1
	I1225 13:22:14.377285 1483224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:22:14.377649 1483224 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:22:14.380352 1483224 out.go:177] * Stopping node "embed-certs-880612"  ...
	I1225 13:22:14.381806 1483224 main.go:141] libmachine: Stopping "embed-certs-880612"...
	I1225 13:22:14.381826 1483224 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:22:14.383871 1483224 main.go:141] libmachine: (embed-certs-880612) Calling .Stop
	I1225 13:22:14.387306 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 0/60
	I1225 13:22:15.389599 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 1/60
	I1225 13:22:16.390983 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 2/60
	I1225 13:22:17.393249 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 3/60
	I1225 13:22:18.394924 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 4/60
	I1225 13:22:19.397213 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 5/60
	I1225 13:22:20.398825 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 6/60
	I1225 13:22:21.400261 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 7/60
	I1225 13:22:22.401891 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 8/60
	I1225 13:22:23.403568 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 9/60
	I1225 13:22:24.404855 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 10/60
	I1225 13:22:25.406422 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 11/60
	I1225 13:22:26.408083 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 12/60
	I1225 13:22:27.409574 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 13/60
	I1225 13:22:28.410827 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 14/60
	I1225 13:22:29.412953 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 15/60
	I1225 13:22:30.414706 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 16/60
	I1225 13:22:31.417214 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 17/60
	I1225 13:22:32.418747 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 18/60
	I1225 13:22:33.420291 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 19/60
	I1225 13:22:34.423029 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 20/60
	I1225 13:22:35.424934 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 21/60
	I1225 13:22:36.426759 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 22/60
	I1225 13:22:37.428586 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 23/60
	I1225 13:22:38.430198 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 24/60
	I1225 13:22:39.432387 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 25/60
	I1225 13:22:40.433876 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 26/60
	I1225 13:22:41.435386 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 27/60
	I1225 13:22:42.437114 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 28/60
	I1225 13:22:43.438564 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 29/60
	I1225 13:22:44.440934 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 30/60
	I1225 13:22:45.442397 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 31/60
	I1225 13:22:46.443838 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 32/60
	I1225 13:22:47.445201 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 33/60
	I1225 13:22:48.446878 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 34/60
	I1225 13:22:49.449103 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 35/60
	I1225 13:22:50.450852 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 36/60
	I1225 13:22:51.452305 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 37/60
	I1225 13:22:52.454022 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 38/60
	I1225 13:22:53.455593 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 39/60
	I1225 13:22:54.457234 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 40/60
	I1225 13:22:55.458769 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 41/60
	I1225 13:22:56.460401 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 42/60
	I1225 13:22:57.462133 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 43/60
	I1225 13:22:58.463617 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 44/60
	I1225 13:22:59.465903 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 45/60
	I1225 13:23:00.467502 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 46/60
	I1225 13:23:01.469182 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 47/60
	I1225 13:23:02.470810 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 48/60
	I1225 13:23:03.472425 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 49/60
	I1225 13:23:04.474896 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 50/60
	I1225 13:23:05.476706 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 51/60
	I1225 13:23:06.478377 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 52/60
	I1225 13:23:07.480152 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 53/60
	I1225 13:23:08.481810 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 54/60
	I1225 13:23:09.484248 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 55/60
	I1225 13:23:10.486005 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 56/60
	I1225 13:23:11.487618 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 57/60
	I1225 13:23:12.489305 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 58/60
	I1225 13:23:13.491158 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 59/60
	I1225 13:23:14.491776 1483224 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:23:14.491845 1483224 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:23:14.491869 1483224 retry.go:31] will retry after 861.756362ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:23:15.353927 1483224 stop.go:39] StopHost: embed-certs-880612
	I1225 13:23:15.354404 1483224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:23:15.354489 1483224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:23:15.370222 1483224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1225 13:23:15.370713 1483224 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:23:15.371290 1483224 main.go:141] libmachine: Using API Version  1
	I1225 13:23:15.371339 1483224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:23:15.371687 1483224 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:23:15.373905 1483224 out.go:177] * Stopping node "embed-certs-880612"  ...
	I1225 13:23:15.375380 1483224 main.go:141] libmachine: Stopping "embed-certs-880612"...
	I1225 13:23:15.375400 1483224 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:23:15.377039 1483224 main.go:141] libmachine: (embed-certs-880612) Calling .Stop
	I1225 13:23:15.380485 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 0/60
	I1225 13:23:16.382079 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 1/60
	I1225 13:23:17.383512 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 2/60
	I1225 13:23:18.384894 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 3/60
	I1225 13:23:19.386383 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 4/60
	I1225 13:23:20.388023 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 5/60
	I1225 13:23:21.389459 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 6/60
	I1225 13:23:22.391198 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 7/60
	I1225 13:23:23.392994 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 8/60
	I1225 13:23:24.394266 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 9/60
	I1225 13:23:25.396456 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 10/60
	I1225 13:23:26.397819 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 11/60
	I1225 13:23:27.399632 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 12/60
	I1225 13:23:28.401072 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 13/60
	I1225 13:23:29.402612 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 14/60
	I1225 13:23:30.404697 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 15/60
	I1225 13:23:31.406518 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 16/60
	I1225 13:23:32.408054 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 17/60
	I1225 13:23:33.409513 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 18/60
	I1225 13:23:34.411158 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 19/60
	I1225 13:23:35.412718 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 20/60
	I1225 13:23:36.414220 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 21/60
	I1225 13:23:37.415783 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 22/60
	I1225 13:23:38.417067 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 23/60
	I1225 13:23:39.418664 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 24/60
	I1225 13:23:40.420631 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 25/60
	I1225 13:23:41.422098 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 26/60
	I1225 13:23:42.423712 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 27/60
	I1225 13:23:43.425195 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 28/60
	I1225 13:23:44.426895 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 29/60
	I1225 13:23:45.429067 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 30/60
	I1225 13:23:46.430669 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 31/60
	I1225 13:23:47.432297 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 32/60
	I1225 13:23:48.433897 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 33/60
	I1225 13:23:49.435490 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 34/60
	I1225 13:23:50.437556 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 35/60
	I1225 13:23:51.438955 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 36/60
	I1225 13:23:52.440444 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 37/60
	I1225 13:23:53.442061 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 38/60
	I1225 13:23:54.443694 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 39/60
	I1225 13:23:55.445818 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 40/60
	I1225 13:23:56.447289 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 41/60
	I1225 13:23:57.448756 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 42/60
	I1225 13:23:58.450471 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 43/60
	I1225 13:23:59.452167 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 44/60
	I1225 13:24:00.454189 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 45/60
	I1225 13:24:01.455847 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 46/60
	I1225 13:24:02.457574 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 47/60
	I1225 13:24:03.459115 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 48/60
	I1225 13:24:04.460699 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 49/60
	I1225 13:24:05.462750 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 50/60
	I1225 13:24:06.464573 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 51/60
	I1225 13:24:07.466306 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 52/60
	I1225 13:24:08.467907 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 53/60
	I1225 13:24:09.470017 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 54/60
	I1225 13:24:10.471887 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 55/60
	I1225 13:24:11.473451 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 56/60
	I1225 13:24:12.475204 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 57/60
	I1225 13:24:13.476867 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 58/60
	I1225 13:24:14.478610 1483224 main.go:141] libmachine: (embed-certs-880612) Waiting for machine to stop 59/60
	I1225 13:24:15.479678 1483224 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:24:15.479740 1483224 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:24:15.481784 1483224 out.go:177] 
	W1225 13:24:15.483173 1483224 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1225 13:24:15.483197 1483224 out.go:239] * 
	* 
	W1225 13:24:15.496095 1483224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:24:15.497727 1483224 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-880612 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612: exit status 3 (18.595522195s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:24:34.094818 1483730 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host
	E1225 13:24:34.094868 1483730 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-880612" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.80s)
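The cadence in the stderr above is the driver-side stop wait: libmachine asks the kvm2 driver to stop the guest, polls the state once per second for up to 60 attempts, retries the whole stop once after a sub-second backoff, and then exits with GUEST_STOP_TIMEOUT (status 82) because the guest still reports "Running". A minimal sketch of that wait/retry shape, with placeholder driver callbacks (this mirrors the log, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithWait requests a stop, polls the state up to 60 times per attempt,
// and allows one retry before giving up, matching the counters in the log.
func stopWithWait(requestStop func() error, getState func() (string, error), interval time.Duration) error {
	for attempt := 0; attempt < 2; attempt++ { // initial try plus one retry
		if err := requestStop(); err != nil {
			return err
		}
		for i := 0; i < 60; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			time.Sleep(interval)
		}
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never powers off. The real run waits one second
	// per attempt, which is why the stop step burns roughly two minutes
	// before exiting with GUEST_STOP_TIMEOUT (status 82).
	err := stopWithWait(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		10*time.Millisecond, // shortened for the sketch
	)
	fmt.Println("stop result:", err)
}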

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-344803 --alsologtostderr -v=3
E1225 13:23:39.759397 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:23:56.706911 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:24:07.347675 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-344803 --alsologtostderr -v=3: exit status 82 (2m1.173174933s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-344803"  ...
	* Stopping node "default-k8s-diff-port-344803"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:22:37.827139 1483434 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:22:37.827448 1483434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:22:37.827460 1483434 out.go:309] Setting ErrFile to fd 2...
	I1225 13:22:37.827467 1483434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:22:37.827681 1483434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:22:37.828011 1483434 out.go:303] Setting JSON to false
	I1225 13:22:37.828109 1483434 mustload.go:65] Loading cluster: default-k8s-diff-port-344803
	I1225 13:22:37.828450 1483434 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:22:37.828532 1483434 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:22:37.828756 1483434 mustload.go:65] Loading cluster: default-k8s-diff-port-344803
	I1225 13:22:37.828880 1483434 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:22:37.828916 1483434 stop.go:39] StopHost: default-k8s-diff-port-344803
	I1225 13:22:37.829489 1483434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:22:37.829544 1483434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:22:37.845106 1483434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I1225 13:22:37.845649 1483434 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:22:37.846285 1483434 main.go:141] libmachine: Using API Version  1
	I1225 13:22:37.846316 1483434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:22:37.846695 1483434 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:22:37.849567 1483434 out.go:177] * Stopping node "default-k8s-diff-port-344803"  ...
	I1225 13:22:37.851408 1483434 main.go:141] libmachine: Stopping "default-k8s-diff-port-344803"...
	I1225 13:22:37.851431 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:22:37.853188 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Stop
	I1225 13:22:37.857407 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 0/60
	I1225 13:22:38.859135 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 1/60
	I1225 13:22:39.860603 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 2/60
	I1225 13:22:40.861901 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 3/60
	I1225 13:22:41.863638 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 4/60
	I1225 13:22:42.866029 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 5/60
	I1225 13:22:43.867595 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 6/60
	I1225 13:22:44.869119 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 7/60
	I1225 13:22:45.870845 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 8/60
	I1225 13:22:46.872386 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 9/60
	I1225 13:22:47.874126 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 10/60
	I1225 13:22:48.875599 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 11/60
	I1225 13:22:49.877175 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 12/60
	I1225 13:22:50.878946 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 13/60
	I1225 13:22:51.880422 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 14/60
	I1225 13:22:52.882622 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 15/60
	I1225 13:22:53.884298 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 16/60
	I1225 13:22:54.885828 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 17/60
	I1225 13:22:55.887548 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 18/60
	I1225 13:22:56.889221 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 19/60
	I1225 13:22:57.890794 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 20/60
	I1225 13:22:58.892413 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 21/60
	I1225 13:22:59.894190 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 22/60
	I1225 13:23:00.896014 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 23/60
	I1225 13:23:01.897549 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 24/60
	I1225 13:23:02.899885 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 25/60
	I1225 13:23:03.901355 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 26/60
	I1225 13:23:04.902906 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 27/60
	I1225 13:23:05.904474 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 28/60
	I1225 13:23:06.906504 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 29/60
	I1225 13:23:07.908052 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 30/60
	I1225 13:23:08.909767 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 31/60
	I1225 13:23:09.911254 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 32/60
	I1225 13:23:10.912735 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 33/60
	I1225 13:23:11.914221 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 34/60
	I1225 13:23:12.916262 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 35/60
	I1225 13:23:13.917976 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 36/60
	I1225 13:23:14.919656 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 37/60
	I1225 13:23:15.921126 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 38/60
	I1225 13:23:16.922779 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 39/60
	I1225 13:23:17.925065 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 40/60
	I1225 13:23:18.927091 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 41/60
	I1225 13:23:19.928595 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 42/60
	I1225 13:23:20.930184 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 43/60
	I1225 13:23:21.931935 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 44/60
	I1225 13:23:22.934357 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 45/60
	I1225 13:23:23.935963 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 46/60
	I1225 13:23:24.937406 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 47/60
	I1225 13:23:25.938967 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 48/60
	I1225 13:23:26.940583 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 49/60
	I1225 13:23:27.943194 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 50/60
	I1225 13:23:28.944755 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 51/60
	I1225 13:23:29.946401 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 52/60
	I1225 13:23:30.947841 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 53/60
	I1225 13:23:31.949474 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 54/60
	I1225 13:23:32.951769 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 55/60
	I1225 13:23:33.953321 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 56/60
	I1225 13:23:34.954930 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 57/60
	I1225 13:23:35.956893 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 58/60
	I1225 13:23:36.958634 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 59/60
	I1225 13:23:37.960168 1483434 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:23:37.960222 1483434 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:23:37.960245 1483434 retry.go:31] will retry after 819.498375ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:23:38.780256 1483434 stop.go:39] StopHost: default-k8s-diff-port-344803
	I1225 13:23:38.780770 1483434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:23:38.780828 1483434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:23:38.796709 1483434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1225 13:23:38.797175 1483434 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:23:38.797669 1483434 main.go:141] libmachine: Using API Version  1
	I1225 13:23:38.797691 1483434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:23:38.798084 1483434 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:23:38.800720 1483434 out.go:177] * Stopping node "default-k8s-diff-port-344803"  ...
	I1225 13:23:38.802365 1483434 main.go:141] libmachine: Stopping "default-k8s-diff-port-344803"...
	I1225 13:23:38.802387 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:23:38.804431 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Stop
	I1225 13:23:38.807841 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 0/60
	I1225 13:23:39.809537 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 1/60
	I1225 13:23:40.811319 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 2/60
	I1225 13:23:41.812862 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 3/60
	I1225 13:23:42.814387 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 4/60
	I1225 13:23:43.816342 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 5/60
	I1225 13:23:44.818790 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 6/60
	I1225 13:23:45.820164 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 7/60
	I1225 13:23:46.821697 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 8/60
	I1225 13:23:47.823134 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 9/60
	I1225 13:23:48.824951 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 10/60
	I1225 13:23:49.826468 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 11/60
	I1225 13:23:50.828074 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 12/60
	I1225 13:23:51.829598 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 13/60
	I1225 13:23:52.831180 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 14/60
	I1225 13:23:53.832767 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 15/60
	I1225 13:23:54.834399 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 16/60
	I1225 13:23:55.835824 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 17/60
	I1225 13:23:56.837431 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 18/60
	I1225 13:23:57.839005 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 19/60
	I1225 13:23:58.841480 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 20/60
	I1225 13:23:59.843267 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 21/60
	I1225 13:24:00.845095 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 22/60
	I1225 13:24:01.846511 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 23/60
	I1225 13:24:02.848114 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 24/60
	I1225 13:24:03.849818 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 25/60
	I1225 13:24:04.851492 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 26/60
	I1225 13:24:05.853099 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 27/60
	I1225 13:24:06.854728 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 28/60
	I1225 13:24:07.856638 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 29/60
	I1225 13:24:08.859214 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 30/60
	I1225 13:24:09.860852 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 31/60
	I1225 13:24:10.862541 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 32/60
	I1225 13:24:11.864187 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 33/60
	I1225 13:24:12.865571 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 34/60
	I1225 13:24:13.867741 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 35/60
	I1225 13:24:14.869551 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 36/60
	I1225 13:24:15.871125 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 37/60
	I1225 13:24:16.872810 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 38/60
	I1225 13:24:17.874307 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 39/60
	I1225 13:24:18.876564 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 40/60
	I1225 13:24:19.878047 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 41/60
	I1225 13:24:20.879721 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 42/60
	I1225 13:24:21.881265 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 43/60
	I1225 13:24:22.882966 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 44/60
	I1225 13:24:23.885199 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 45/60
	I1225 13:24:24.886849 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 46/60
	I1225 13:24:25.888439 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 47/60
	I1225 13:24:26.890112 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 48/60
	I1225 13:24:27.891810 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 49/60
	I1225 13:24:28.894049 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 50/60
	I1225 13:24:29.895688 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 51/60
	I1225 13:24:30.897842 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 52/60
	I1225 13:24:31.899640 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 53/60
	I1225 13:24:32.901364 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 54/60
	I1225 13:24:33.903593 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 55/60
	I1225 13:24:34.905136 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 56/60
	I1225 13:24:35.906977 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 57/60
	I1225 13:24:36.908556 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 58/60
	I1225 13:24:37.910288 1483434 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for machine to stop 59/60
	I1225 13:24:38.911489 1483434 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:24:38.911542 1483434 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:24:38.913718 1483434 out.go:177] 
	W1225 13:24:38.915175 1483434 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1225 13:24:38.915201 1483434 out.go:239] * 
	* 
	W1225 13:24:38.928372 1483434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:24:38.930378 1483434 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-344803 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803: exit status 3 (18.457968348s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:24:57.390931 1483864 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host
	E1225 13:24:57.390957 1483864 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-344803" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.63s)
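This run repeats the embed-certs stop timeout on another profile. The interleaved cert_rotation errors appear to come from client-go trying to reload client certificates for earlier profiles (functional-467117, addons-294911) whose files no longer exist, so they read as noise rather than the cause. When a kvm2-backed stop hangs like this, the guest can also be checked from the libvirt side; a sketch of one way to do that, assuming the libvirt domain carries the profile name (an assumption of this sketch, not something stated in the report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// domState asks libvirt for the domain's state via `virsh domstate`.
func domState(domain string) (string, error) {
	out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := domState("default-k8s-diff-port-344803")
	if err != nil {
		fmt.Println("virsh domstate failed:", err)
		return
	}
	// Expect something like "running" here while minikube is still reporting
	// a stop timeout, confirming the guest never powered off.
	fmt.Println("libvirt reports:", state)
}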

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612: exit status 3 (3.199726952s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:24:37.294831 1483805 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host
	E1225 13:24:37.294864 1483805 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.163305137s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612: exit status 3 (3.052085949s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:24:46.510894 1483905 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host
	E1225 13:24:46.510929 1483905 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.179:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-880612" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803: exit status 3 (3.200129307s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:25:00.590906 1484003 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host
	E1225 13:25:00.590946 1484003 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-344803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-344803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.164371099s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-344803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803: exit status 3 (3.050885329s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1225 13:25:09.806883 1484073 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host
	E1225 13:25:09.806915 1484073 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.39:22: connect: no route to host
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-344803" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:28:56.707125 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:29:07.348051 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-198979 -n old-k8s-version-198979
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:37:02.65249233 +0000 UTC m=+4847.270969361
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-198979 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-198979 logs -n 25: (1.732721796s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-435411                           | kubernetes-upgrade-435411    | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:17 UTC |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-198979        | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:25:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:25:09.868120 1484104 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:25:09.868323 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868335 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:25:09.868341 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868532 1484104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:25:09.869122 1484104 out.go:303] Setting JSON to false
	I1225 13:25:09.870130 1484104 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158863,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:25:09.870205 1484104 start.go:138] virtualization: kvm guest
	I1225 13:25:09.872541 1484104 out.go:177] * [default-k8s-diff-port-344803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:25:09.874217 1484104 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:25:09.874305 1484104 notify.go:220] Checking for updates...
	I1225 13:25:09.875839 1484104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:25:09.877587 1484104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:25:09.879065 1484104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:25:09.880503 1484104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:25:09.881819 1484104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:25:09.883607 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:25:09.884026 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.884110 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.899270 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1225 13:25:09.899708 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.900286 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.900337 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.900687 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.900912 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.901190 1484104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:25:09.901525 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.901579 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.916694 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1225 13:25:09.917130 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.917673 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.917704 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.918085 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.918333 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.953536 1484104 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:25:09.955050 1484104 start.go:298] selected driver: kvm2
	I1225 13:25:09.955065 1484104 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.955241 1484104 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:25:09.955956 1484104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.956047 1484104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:25:09.971769 1484104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:25:09.972199 1484104 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:25:09.972296 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:25:09.972313 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:25:09.972334 1484104 start_flags.go:323] config:
	{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-34480
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.972534 1484104 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.975411 1484104 out.go:177] * Starting control plane node default-k8s-diff-port-344803 in cluster default-k8s-diff-port-344803
	I1225 13:25:07.694690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:09.976744 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:25:09.976814 1484104 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:25:09.976830 1484104 cache.go:56] Caching tarball of preloaded images
	I1225 13:25:09.976928 1484104 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:25:09.976941 1484104 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:25:09.977353 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:25:09.977710 1484104 start.go:365] acquiring machines lock for default-k8s-diff-port-344803: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:10.766734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:16.850681 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:19.922690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:25.998796 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:29.070780 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:35.150661 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:38.222822 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:44.302734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.379073 1483118 start.go:369] acquired machines lock for "no-preload-330063" in 3m45.211894916s
	I1225 13:25:50.379143 1483118 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:25:50.379155 1483118 fix.go:54] fixHost starting: 
	I1225 13:25:50.379692 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:50.379739 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:50.395491 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1225 13:25:50.395953 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:50.396490 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:25:50.396512 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:50.396880 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:50.397080 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:25:50.397224 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:25:50.399083 1483118 fix.go:102] recreateIfNeeded on no-preload-330063: state=Stopped err=<nil>
	I1225 13:25:50.399110 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	W1225 13:25:50.399283 1483118 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:25:50.401483 1483118 out.go:177] * Restarting existing kvm2 VM for "no-preload-330063" ...
	I1225 13:25:47.374782 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.376505 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:25:50.376562 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:25:50.378895 1482618 machine.go:91] provisioned docker machine in 4m37.578359235s
	I1225 13:25:50.378958 1482618 fix.go:56] fixHost completed within 4m37.60680956s
	I1225 13:25:50.378968 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 4m37.606859437s
	W1225 13:25:50.378992 1482618 start.go:694] error starting host: provision: host is not running
	W1225 13:25:50.379100 1482618 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 13:25:50.379111 1482618 start.go:709] Will try again in 5 seconds ...
	I1225 13:25:50.403280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Start
	I1225 13:25:50.403507 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring networks are active...
	I1225 13:25:50.404422 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network default is active
	I1225 13:25:50.404784 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network mk-no-preload-330063 is active
	I1225 13:25:50.405087 1483118 main.go:141] libmachine: (no-preload-330063) Getting domain xml...
	I1225 13:25:50.405654 1483118 main.go:141] libmachine: (no-preload-330063) Creating domain...
	I1225 13:25:51.676192 1483118 main.go:141] libmachine: (no-preload-330063) Waiting to get IP...
	I1225 13:25:51.677110 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.677638 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.677715 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.677616 1484268 retry.go:31] will retry after 268.018359ms: waiting for machine to come up
	I1225 13:25:51.947683 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.948172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.948198 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.948118 1484268 retry.go:31] will retry after 278.681465ms: waiting for machine to come up
	I1225 13:25:52.228745 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.229234 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.229265 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.229166 1484268 retry.go:31] will retry after 329.72609ms: waiting for machine to come up
	I1225 13:25:52.560878 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.561315 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.561348 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.561257 1484268 retry.go:31] will retry after 398.659264ms: waiting for machine to come up
	I1225 13:25:52.962067 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.962596 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.962620 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.962548 1484268 retry.go:31] will retry after 474.736894ms: waiting for machine to come up
	I1225 13:25:53.439369 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:53.439834 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:53.439856 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:53.439795 1484268 retry.go:31] will retry after 632.915199ms: waiting for machine to come up
	I1225 13:25:54.074832 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.075320 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.075349 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.075286 1484268 retry.go:31] will retry after 889.605242ms: waiting for machine to come up
	I1225 13:25:54.966323 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.966800 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.966822 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.966757 1484268 retry.go:31] will retry after 1.322678644s: waiting for machine to come up
	I1225 13:25:55.379741 1482618 start.go:365] acquiring machines lock for old-k8s-version-198979: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:56.291182 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:56.291604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:56.291633 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:56.291567 1484268 retry.go:31] will retry after 1.717647471s: waiting for machine to come up
	I1225 13:25:58.011626 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:58.012081 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:58.012116 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:58.012018 1484268 retry.go:31] will retry after 2.29935858s: waiting for machine to come up
	I1225 13:26:00.314446 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:00.314833 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:00.314858 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:00.314806 1484268 retry.go:31] will retry after 2.50206405s: waiting for machine to come up
	I1225 13:26:02.819965 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:02.820458 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:02.820490 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:02.820403 1484268 retry.go:31] will retry after 2.332185519s: waiting for machine to come up
	I1225 13:26:05.155725 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:05.156228 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:05.156263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:05.156153 1484268 retry.go:31] will retry after 2.769754662s: waiting for machine to come up
	I1225 13:26:07.929629 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:07.930087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:07.930126 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:07.930040 1484268 retry.go:31] will retry after 4.407133766s: waiting for machine to come up
	I1225 13:26:13.687348 1483946 start.go:369] acquired machines lock for "embed-certs-880612" in 1m27.002513209s
	I1225 13:26:13.687426 1483946 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:13.687436 1483946 fix.go:54] fixHost starting: 
	I1225 13:26:13.687850 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:13.687916 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:13.706054 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1225 13:26:13.706521 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:13.707063 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:26:13.707087 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:13.707472 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:13.707645 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:13.707832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:26:13.709643 1483946 fix.go:102] recreateIfNeeded on embed-certs-880612: state=Stopped err=<nil>
	I1225 13:26:13.709676 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	W1225 13:26:13.709868 1483946 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:13.712452 1483946 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880612" ...
	I1225 13:26:12.339674 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340219 1483118 main.go:141] libmachine: (no-preload-330063) Found IP for machine: 192.168.72.232
	I1225 13:26:12.340243 1483118 main.go:141] libmachine: (no-preload-330063) Reserving static IP address...
	I1225 13:26:12.340263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has current primary IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340846 1483118 main.go:141] libmachine: (no-preload-330063) Reserved static IP address: 192.168.72.232
	I1225 13:26:12.340896 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.340912 1483118 main.go:141] libmachine: (no-preload-330063) Waiting for SSH to be available...
	I1225 13:26:12.340947 1483118 main.go:141] libmachine: (no-preload-330063) DBG | skip adding static IP to network mk-no-preload-330063 - found existing host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"}
	I1225 13:26:12.340962 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Getting to WaitForSSH function...
	I1225 13:26:12.343164 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343417 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.343448 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343552 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH client type: external
	I1225 13:26:12.343566 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa (-rw-------)
	I1225 13:26:12.343587 1483118 main.go:141] libmachine: (no-preload-330063) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:12.343595 1483118 main.go:141] libmachine: (no-preload-330063) DBG | About to run SSH command:
	I1225 13:26:12.343603 1483118 main.go:141] libmachine: (no-preload-330063) DBG | exit 0
	I1225 13:26:12.434661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:12.435101 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetConfigRaw
	I1225 13:26:12.435827 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.438300 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438673 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.438705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438870 1483118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:26:12.439074 1483118 machine.go:88] provisioning docker machine ...
	I1225 13:26:12.439093 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:12.439335 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439556 1483118 buildroot.go:166] provisioning hostname "no-preload-330063"
	I1225 13:26:12.439584 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439789 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.442273 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442630 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.442661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442768 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.442956 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443127 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443271 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.443410 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.443772 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.443787 1483118 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-330063 && echo "no-preload-330063" | sudo tee /etc/hostname
	I1225 13:26:12.581579 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-330063
	
	I1225 13:26:12.581609 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.584621 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.584949 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.584979 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.585252 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.585495 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585656 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585790 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.585947 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.586320 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.586346 1483118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-330063' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-330063/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-330063' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:12.717139 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:12.717176 1483118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:12.717197 1483118 buildroot.go:174] setting up certificates
	I1225 13:26:12.717212 1483118 provision.go:83] configureAuth start
	I1225 13:26:12.717229 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.717570 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.720469 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.720828 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.720859 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.721016 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.723432 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723758 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.723815 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723944 1483118 provision.go:138] copyHostCerts
	I1225 13:26:12.724021 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:12.724035 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:12.724102 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:12.724207 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:12.724215 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:12.724242 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:12.724323 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:12.724330 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:12.724351 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:12.724408 1483118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.no-preload-330063 san=[192.168.72.232 192.168.72.232 localhost 127.0.0.1 minikube no-preload-330063]
	I1225 13:26:12.929593 1483118 provision.go:172] copyRemoteCerts
	I1225 13:26:12.929665 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:12.929699 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.932608 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.932934 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.932978 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.933144 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.933389 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.933581 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.933738 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.023574 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:13.047157 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:26:13.070779 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:13.094249 1483118 provision.go:86] duration metric: configureAuth took 377.018818ms
	I1225 13:26:13.094284 1483118 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:13.094538 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:13.094665 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.097705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098133 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.098179 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098429 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.098708 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.098888 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.099029 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.099191 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.099516 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.099534 1483118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:13.430084 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:13.430138 1483118 machine.go:91] provisioned docker machine in 991.050011ms
	I1225 13:26:13.430150 1483118 start.go:300] post-start starting for "no-preload-330063" (driver="kvm2")
	I1225 13:26:13.430162 1483118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:13.430185 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.430616 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:13.430661 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.433623 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434018 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.434054 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434191 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.434413 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.434586 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.434700 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.523954 1483118 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:13.528009 1483118 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:13.528040 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:13.528118 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:13.528214 1483118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:13.528359 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:13.536826 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:13.561011 1483118 start.go:303] post-start completed in 130.840608ms
	I1225 13:26:13.561046 1483118 fix.go:56] fixHost completed within 23.181891118s
	I1225 13:26:13.561078 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.563717 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564040 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.564087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564268 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.564504 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564702 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564812 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.564965 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.565326 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.565340 1483118 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:26:13.687155 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510773.671808211
	
	I1225 13:26:13.687181 1483118 fix.go:206] guest clock: 1703510773.671808211
	I1225 13:26:13.687189 1483118 fix.go:219] Guest: 2023-12-25 13:26:13.671808211 +0000 UTC Remote: 2023-12-25 13:26:13.561052282 +0000 UTC m=+248.574935292 (delta=110.755929ms)
	I1225 13:26:13.687209 1483118 fix.go:190] guest clock delta is within tolerance: 110.755929ms
	I1225 13:26:13.687214 1483118 start.go:83] releasing machines lock for "no-preload-330063", held for 23.308100249s
	I1225 13:26:13.687243 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.687561 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:13.690172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690572 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.690604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690780 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691362 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691534 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691615 1483118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:13.691670 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.691807 1483118 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:13.691842 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.694593 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694871 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694943 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.694967 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695202 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695293 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.695319 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695452 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695508 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695613 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.695725 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695813 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.695899 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.696068 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.812135 1483118 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:13.817944 1483118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:13.965641 1483118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:13.973263 1483118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:13.973433 1483118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:13.991077 1483118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:13.991112 1483118 start.go:475] detecting cgroup driver to use...
	I1225 13:26:13.991197 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:14.005649 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:14.018464 1483118 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:14.018540 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:14.031361 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:14.046011 1483118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:14.152826 1483118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:14.281488 1483118 docker.go:219] disabling docker service ...
	I1225 13:26:14.281577 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:14.297584 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:14.311896 1483118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:14.448141 1483118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:14.583111 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:14.599419 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:14.619831 1483118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:14.619909 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.631979 1483118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:14.632065 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.643119 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.655441 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.666525 1483118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:14.678080 1483118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:14.687889 1483118 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:14.687957 1483118 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:14.702290 1483118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:14.712225 1483118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:14.836207 1483118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:15.019332 1483118 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:15.019424 1483118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:15.024755 1483118 start.go:543] Will wait 60s for crictl version
	I1225 13:26:15.024844 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.028652 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:15.074415 1483118 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:15.074550 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.128559 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.178477 1483118 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1225 13:26:13.714488 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Start
	I1225 13:26:13.714708 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring networks are active...
	I1225 13:26:13.715513 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network default is active
	I1225 13:26:13.715868 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network mk-embed-certs-880612 is active
	I1225 13:26:13.716279 1483946 main.go:141] libmachine: (embed-certs-880612) Getting domain xml...
	I1225 13:26:13.716905 1483946 main.go:141] libmachine: (embed-certs-880612) Creating domain...
	I1225 13:26:15.049817 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting to get IP...
	I1225 13:26:15.051040 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.051641 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.051756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.051615 1484395 retry.go:31] will retry after 199.911042ms: waiting for machine to come up
	I1225 13:26:15.253158 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.260582 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.260620 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.260519 1484395 retry.go:31] will retry after 285.022636ms: waiting for machine to come up
	I1225 13:26:15.547290 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.547756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.547787 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.547692 1484395 retry.go:31] will retry after 327.637369ms: waiting for machine to come up
	I1225 13:26:15.877618 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.878119 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.878153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.878058 1484395 retry.go:31] will retry after 384.668489ms: waiting for machine to come up
	I1225 13:26:16.264592 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.265056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.265084 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.265005 1484395 retry.go:31] will retry after 468.984683ms: waiting for machine to come up
	I1225 13:26:15.180205 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:15.183372 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.183820 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:15.183862 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.184054 1483118 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:15.188935 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:15.202790 1483118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:26:15.202839 1483118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:15.245267 1483118 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:26:15.245297 1483118 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:26:15.245409 1483118 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.245430 1483118 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.245448 1483118 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.245467 1483118 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.245468 1483118 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1225 13:26:15.245534 1483118 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.245447 1483118 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.245404 1483118 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.247839 1483118 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.247850 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.247874 1483118 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.247911 1483118 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.247980 1483118 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1225 13:26:15.247984 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.248043 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.248281 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.404332 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.405729 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.407712 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1225 13:26:15.412419 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.413201 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.413349 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.453117 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.533541 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.536843 1483118 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1225 13:26:15.536896 1483118 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.536950 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.576965 1483118 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1225 13:26:15.577010 1483118 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.577078 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688643 1483118 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1225 13:26:15.688696 1483118 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.688710 1483118 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1225 13:26:15.688750 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688759 1483118 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.688765 1483118 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1225 13:26:15.688794 1483118 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.688813 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688835 1483118 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1225 13:26:15.688847 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688858 1483118 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.688869 1483118 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1225 13:26:15.688890 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688896 1483118 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.688910 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.688921 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688949 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.706288 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.779043 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1225 13:26:15.779170 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1225 13:26:15.779181 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.779297 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:15.779309 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.779274 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.779439 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.779507 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.864891 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.865017 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.884972 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885024 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885035 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1225 13:26:15.885045 1483118 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885091 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885094 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885109 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885146 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1225 13:26:15.885167 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1225 13:26:15.885229 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:15.885239 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1225 13:26:15.885273 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1225 13:26:15.898753 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1225 13:26:17.966777 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.08165399s)
	I1225 13:26:17.966822 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1225 13:26:17.966836 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.081714527s)
	I1225 13:26:17.966865 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.081735795s)
	I1225 13:26:17.966848 1483118 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:17.966894 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966874 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966936 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:16.736013 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.736519 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.736553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.736449 1484395 retry.go:31] will retry after 873.004128ms: waiting for machine to come up
	I1225 13:26:17.611675 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:17.612135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:17.612160 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:17.612085 1484395 retry.go:31] will retry after 1.093577821s: waiting for machine to come up
	I1225 13:26:18.707411 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:18.707936 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:18.707994 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:18.707904 1484395 retry.go:31] will retry after 1.364130049s: waiting for machine to come up
	I1225 13:26:20.074559 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:20.075102 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:20.075135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:20.075033 1484395 retry.go:31] will retry after 1.740290763s: waiting for machine to come up
	I1225 13:26:21.677915 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.710943608s)
	I1225 13:26:21.677958 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1225 13:26:21.677990 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:21.678050 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:23.630977 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.952875837s)
	I1225 13:26:23.631018 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1225 13:26:23.631051 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:23.631112 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:21.818166 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:21.818695 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:21.818728 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:21.818641 1484395 retry.go:31] will retry after 2.035498479s: waiting for machine to come up
	I1225 13:26:23.856368 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:23.857094 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:23.857120 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:23.856997 1484395 retry.go:31] will retry after 2.331127519s: waiting for machine to come up
	I1225 13:26:26.191862 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:26.192571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:26.192608 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:26.192513 1484395 retry.go:31] will retry after 3.191632717s: waiting for machine to come up
	I1225 13:26:26.193816 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.56267278s)
	I1225 13:26:26.193849 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1225 13:26:26.193884 1483118 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:26.193951 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:27.342879 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.148892619s)
	I1225 13:26:27.342913 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 13:26:27.342948 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:27.343014 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:29.909035 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.565991605s)
	I1225 13:26:29.909080 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1225 13:26:29.909105 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.909159 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.386007 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:29.386335 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:29.386366 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:29.386294 1484395 retry.go:31] will retry after 3.786228584s: waiting for machine to come up
	I1225 13:26:34.439583 1484104 start.go:369] acquired machines lock for "default-k8s-diff-port-344803" in 1m24.461830001s
	I1225 13:26:34.439666 1484104 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:34.439686 1484104 fix.go:54] fixHost starting: 
	I1225 13:26:34.440164 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:34.440230 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:34.457403 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I1225 13:26:34.457867 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:34.458351 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:26:34.458422 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:34.458748 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:34.458989 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:34.459176 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:26:34.460975 1484104 fix.go:102] recreateIfNeeded on default-k8s-diff-port-344803: state=Stopped err=<nil>
	I1225 13:26:34.461008 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	W1225 13:26:34.461188 1484104 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:34.463715 1484104 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-344803" ...
	I1225 13:26:34.465022 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Start
	I1225 13:26:34.465274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring networks are active...
	I1225 13:26:34.466182 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network default is active
	I1225 13:26:34.466565 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network mk-default-k8s-diff-port-344803 is active
	I1225 13:26:34.466922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Getting domain xml...
	I1225 13:26:34.467691 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Creating domain...
	I1225 13:26:32.065345 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.15614946s)
	I1225 13:26:32.065380 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1225 13:26:32.065414 1483118 cache_images.go:123] Successfully loaded all cached images
	I1225 13:26:32.065421 1483118 cache_images.go:92] LoadImages completed in 16.820112197s
	I1225 13:26:32.065498 1483118 ssh_runner.go:195] Run: crio config
	I1225 13:26:32.120989 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:32.121019 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:32.121045 1483118 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:32.121063 1483118 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-330063 NodeName:no-preload-330063 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:32.121216 1483118 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-330063"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:32.121297 1483118 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-330063 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:32.121357 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:26:32.132569 1483118 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:32.132677 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:32.142052 1483118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1225 13:26:32.158590 1483118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:26:32.174761 1483118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1225 13:26:32.191518 1483118 ssh_runner.go:195] Run: grep 192.168.72.232	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:32.195353 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:32.206845 1483118 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063 for IP: 192.168.72.232
	I1225 13:26:32.206879 1483118 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:32.207098 1483118 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:32.207145 1483118 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:32.207212 1483118 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.key
	I1225 13:26:32.207270 1483118 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key.4e9d87c6
	I1225 13:26:32.207323 1483118 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key
	I1225 13:26:32.207437 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:32.207465 1483118 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:32.207475 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:32.207513 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:32.207539 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:32.207565 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:32.207607 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:32.208427 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:32.231142 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:32.253335 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:32.275165 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:32.297762 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:32.320671 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:32.344125 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:32.368066 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:32.390688 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:32.412849 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:32.435445 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:32.457687 1483118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:32.474494 1483118 ssh_runner.go:195] Run: openssl version
	I1225 13:26:32.480146 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:32.491141 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495831 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495902 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.501393 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:32.511643 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:32.521843 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526421 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526514 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.531988 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:32.542920 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:32.553604 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558381 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558478 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.563913 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
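	The three hash-and-symlink commands above install each CA under /etc/ssl/certs using OpenSSL's hashed-name convention (<subject-hash>.0), which is how the guest's TLS stack discovers trusted certificates. A minimal standalone Go sketch of the same operation, run locally rather than through minikube's ssh_runner, and assuming an openssl binary on PATH (illustration only, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert mirrors the log's "openssl x509 -hash -noout -in <pem>"
	// followed by "ln -fs <pem> /etc/ssl/certs/<hash>.0" for one certificate.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace a stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}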
	I1225 13:26:32.574591 1483118 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:32.579046 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:32.584821 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:32.590781 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:32.596456 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:32.601978 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:32.607981 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:32.613785 1483118 kubeadm.go:404] StartCluster: {Name:no-preload-330063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:32.613897 1483118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:32.613955 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:32.651782 1483118 cri.go:89] found id: ""
	I1225 13:26:32.651858 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:32.664617 1483118 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:32.664648 1483118 kubeadm.go:636] restartCluster start
	I1225 13:26:32.664710 1483118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:32.674727 1483118 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:32.676090 1483118 kubeconfig.go:92] found "no-preload-330063" server: "https://192.168.72.232:8443"
	I1225 13:26:32.679085 1483118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:32.689716 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:32.689824 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:32.702305 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.189843 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.189955 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.202514 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.689935 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.690048 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.703975 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.190601 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.190722 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.203987 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.690505 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.690639 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.701704 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.173890 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174349 1483946 main.go:141] libmachine: (embed-certs-880612) Found IP for machine: 192.168.50.179
	I1225 13:26:33.174372 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has current primary IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174405 1483946 main.go:141] libmachine: (embed-certs-880612) Reserving static IP address...
	I1225 13:26:33.174805 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.174845 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | skip adding static IP to network mk-embed-certs-880612 - found existing host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"}
	I1225 13:26:33.174860 1483946 main.go:141] libmachine: (embed-certs-880612) Reserved static IP address: 192.168.50.179
	I1225 13:26:33.174877 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting for SSH to be available...
	I1225 13:26:33.174892 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Getting to WaitForSSH function...
	I1225 13:26:33.177207 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177579 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.177609 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH client type: external
	I1225 13:26:33.177737 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa (-rw-------)
	I1225 13:26:33.177777 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:33.177790 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | About to run SSH command:
	I1225 13:26:33.177803 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | exit 0
	I1225 13:26:33.274328 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:33.274736 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetConfigRaw
	I1225 13:26:33.275462 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.278056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278429 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.278483 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278725 1483946 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:26:33.278982 1483946 machine.go:88] provisioning docker machine ...
	I1225 13:26:33.279013 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:33.279236 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279448 1483946 buildroot.go:166] provisioning hostname "embed-certs-880612"
	I1225 13:26:33.279468 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279619 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.281930 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282277 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.282311 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282474 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.282704 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.282885 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.283033 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.283194 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.283700 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.283723 1483946 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880612 && echo "embed-certs-880612" | sudo tee /etc/hostname
	I1225 13:26:33.433456 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880612
	
	I1225 13:26:33.433483 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.436392 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.436794 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.436840 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.437004 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.437233 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437446 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437595 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.437783 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.438112 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.438134 1483946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880612/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:33.579776 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:33.579813 1483946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:33.579845 1483946 buildroot.go:174] setting up certificates
	I1225 13:26:33.579859 1483946 provision.go:83] configureAuth start
	I1225 13:26:33.579874 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.580151 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.582843 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583233 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.583266 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583461 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.585844 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586216 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.586253 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586454 1483946 provision.go:138] copyHostCerts
	I1225 13:26:33.586532 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:33.586548 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:33.586604 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:33.586692 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:33.586704 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:33.586723 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:33.586771 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:33.586778 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:33.586797 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:33.586837 1483946 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880612 san=[192.168.50.179 192.168.50.179 localhost 127.0.0.1 minikube embed-certs-880612]
	I1225 13:26:33.640840 1483946 provision.go:172] copyRemoteCerts
	I1225 13:26:33.640921 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:33.640951 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.643970 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644390 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.644419 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644606 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.644877 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.645065 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.645204 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:33.744907 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:33.769061 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 13:26:33.792125 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:33.816115 1483946 provision.go:86] duration metric: configureAuth took 236.215977ms
	I1225 13:26:33.816159 1483946 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:33.816373 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:33.816497 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.819654 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820075 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.820108 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820287 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.820519 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820738 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820873 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.821068 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.821403 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.821428 1483946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:34.159844 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:34.159882 1483946 machine.go:91] provisioned docker machine in 880.882549ms
	I1225 13:26:34.159897 1483946 start.go:300] post-start starting for "embed-certs-880612" (driver="kvm2")
	I1225 13:26:34.159934 1483946 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:34.159964 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.160327 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:34.160358 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.163009 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163367 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.163400 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163600 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.163801 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.163943 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.164093 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.261072 1483946 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:34.265655 1483946 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:34.265686 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:34.265777 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:34.265881 1483946 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:34.265996 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:34.276013 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:34.299731 1483946 start.go:303] post-start completed in 139.812994ms
	I1225 13:26:34.299783 1483946 fix.go:56] fixHost completed within 20.612345515s
	I1225 13:26:34.299813 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.302711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303189 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.303229 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303363 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.303617 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.303856 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.304000 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.304198 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:34.304522 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:34.304535 1483946 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:34.439399 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510794.384723199
	
	I1225 13:26:34.439426 1483946 fix.go:206] guest clock: 1703510794.384723199
	I1225 13:26:34.439433 1483946 fix.go:219] Guest: 2023-12-25 13:26:34.384723199 +0000 UTC Remote: 2023-12-25 13:26:34.29978875 +0000 UTC m=+107.780041384 (delta=84.934449ms)
	I1225 13:26:34.439468 1483946 fix.go:190] guest clock delta is within tolerance: 84.934449ms
	I1225 13:26:34.439475 1483946 start.go:83] releasing machines lock for "embed-certs-880612", held for 20.75208465s
	I1225 13:26:34.439518 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.439832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:34.442677 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443002 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.443031 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.443827 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444029 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444168 1483946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:34.444225 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.444259 1483946 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:34.444295 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.447106 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447136 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447497 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447533 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447677 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447719 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447860 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447904 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447982 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448094 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448170 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.448219 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.572590 1483946 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:34.578648 1483946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:34.723874 1483946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:34.731423 1483946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:34.731495 1483946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:34.752447 1483946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:34.752478 1483946 start.go:475] detecting cgroup driver to use...
	I1225 13:26:34.752539 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:34.766782 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:34.781457 1483946 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:34.781548 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:34.798097 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:34.813743 1483946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:34.936843 1483946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:35.053397 1483946 docker.go:219] disabling docker service ...
	I1225 13:26:35.053478 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:35.067702 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:35.079670 1483946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:35.213241 1483946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:35.346105 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:35.359207 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:35.377259 1483946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:35.377347 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.388026 1483946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:35.388116 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.398180 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.411736 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.425888 1483946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:35.436586 1483946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:35.446969 1483946 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:35.447028 1483946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:35.461401 1483946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:35.471896 1483946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:35.619404 1483946 ssh_runner.go:195] Run: sudo systemctl restart crio
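	The sed steps above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image and cgroup manager) before CRI-O is restarted. A hedged, local-only Go sketch of those two file edits, assuming direct file access instead of the SSH-based sed shown in the log:

	package main

	import (
		"os"
		"regexp"
	)

	// configureCrio applies the same substitutions as the logged sed commands:
	//   pause_image    -> "registry.k8s.io/pause:3.9"
	//   cgroup_manager -> "cgroupfs"
	func configureCrio(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			os.Exit(1)
		}
	}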
	I1225 13:26:35.825331 1483946 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:35.825410 1483946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:35.830699 1483946 start.go:543] Will wait 60s for crictl version
	I1225 13:26:35.830779 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:26:35.834938 1483946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:35.874595 1483946 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:35.874717 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.924227 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.982707 1483946 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:35.984401 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:35.987241 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987669 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:35.987708 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987991 1483946 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:35.992383 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:36.004918 1483946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:36.005000 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:36.053783 1483946 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:36.053887 1483946 ssh_runner.go:195] Run: which lz4
	I1225 13:26:36.058040 1483946 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:36.062730 1483946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:36.062785 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:35.824151 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting to get IP...
	I1225 13:26:35.825061 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825643 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:35.825605 1484550 retry.go:31] will retry after 292.143168ms: waiting for machine to come up
	I1225 13:26:36.119220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119787 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.119666 1484550 retry.go:31] will retry after 250.340048ms: waiting for machine to come up
	I1225 13:26:36.372343 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372894 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372932 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.372840 1484550 retry.go:31] will retry after 434.335692ms: waiting for machine to come up
	I1225 13:26:36.808477 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809037 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809070 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.808999 1484550 retry.go:31] will retry after 455.184367ms: waiting for machine to come up
	I1225 13:26:37.265791 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266330 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266364 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.266278 1484550 retry.go:31] will retry after 487.994897ms: waiting for machine to come up
	I1225 13:26:37.756220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756745 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756774 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.756699 1484550 retry.go:31] will retry after 817.108831ms: waiting for machine to come up
	I1225 13:26:38.575846 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576271 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576301 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:38.576222 1484550 retry.go:31] will retry after 1.022104679s: waiting for machine to come up
	I1225 13:26:39.600386 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600901 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:39.600796 1484550 retry.go:31] will retry after 1.318332419s: waiting for machine to come up
	I1225 13:26:35.190721 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.190828 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.203971 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:35.689934 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.690032 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.701978 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.190256 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.190355 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.204476 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.689969 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.690062 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.706632 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.189808 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.189921 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.203895 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.690391 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.690499 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.704914 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.190575 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.190694 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.208546 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.690090 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.690260 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.701827 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.190421 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.190549 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.202377 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.689978 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.690104 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.706511 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
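	The repeated "Checking apiserver status ..." entries for process 1483118 show a roughly 500ms polling loop: pgrep is retried until kube-apiserver reports a PID or a deadline expires. A simplified Go sketch of that retry pattern (hypothetical; not minikube's api_server.go implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer retries the pgrep check seen in the log until it finds a
	// PID or the timeout elapses, sleeping ~500ms between attempts.
	func waitForAPIServer(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				return string(out), nil // apiserver process found
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		return "", fmt.Errorf("apiserver did not come up within %s", timeout)
	}

	func main() {
		if pid, err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Print("kube-apiserver pid: ", pid)
		}
	}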
	I1225 13:26:37.963805 1483946 crio.go:444] Took 1.905809 seconds to copy over tarball
	I1225 13:26:37.963892 1483946 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:40.988182 1483946 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024256156s)
	I1225 13:26:40.988214 1483946 crio.go:451] Took 3.024377 seconds to extract the tarball
	I1225 13:26:40.988225 1483946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:26:41.030256 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:41.085117 1483946 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:26:41.085147 1483946 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:26:41.085236 1483946 ssh_runner.go:195] Run: crio config
	I1225 13:26:41.149962 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:26:41.149993 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:41.150020 1483946 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:41.150044 1483946 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880612 NodeName:embed-certs-880612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:41.150237 1483946 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880612"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:41.150312 1483946 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-880612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:41.150367 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:26:41.160557 1483946 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:41.160681 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:41.170564 1483946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1225 13:26:41.187315 1483946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:26:41.204638 1483946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1225 13:26:41.222789 1483946 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:41.226604 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:41.238315 1483946 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612 for IP: 192.168.50.179
	I1225 13:26:41.238363 1483946 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:41.238614 1483946 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:41.238665 1483946 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:41.238768 1483946 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/client.key
	I1225 13:26:41.238860 1483946 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key.518daada
	I1225 13:26:41.238925 1483946 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key
	I1225 13:26:41.239060 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:41.239098 1483946 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:41.239122 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:41.239167 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:41.239204 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:41.239245 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:41.239300 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:41.240235 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:41.265422 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:41.290398 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:41.315296 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:41.339984 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:41.363071 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:41.392035 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:41.419673 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:41.444242 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:41.468314 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:41.493811 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:41.518255 1483946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:41.535605 1483946 ssh_runner.go:195] Run: openssl version
	I1225 13:26:41.541254 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:41.551784 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556610 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556686 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.562299 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:41.572173 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:40.921702 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922293 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922335 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:40.922225 1484550 retry.go:31] will retry after 1.835505717s: waiting for machine to come up
	I1225 13:26:42.760187 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760688 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760714 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:42.760625 1484550 retry.go:31] will retry after 1.646709972s: waiting for machine to come up
	I1225 13:26:44.409540 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410023 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410064 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:44.409998 1484550 retry.go:31] will retry after 1.922870398s: waiting for machine to come up
	I1225 13:26:40.190712 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.190831 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.205624 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:40.690729 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.690835 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.702671 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.190145 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.190234 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.201991 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.690585 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.690683 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.704041 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.190633 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.190745 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.202086 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.690049 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.690177 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.701556 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.701597 1483118 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:42.701611 1483118 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:42.701635 1483118 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:42.701719 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:42.745733 1483118 cri.go:89] found id: ""
	I1225 13:26:42.745835 1483118 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:42.761355 1483118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:42.773734 1483118 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:42.773812 1483118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785213 1483118 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:42.927378 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.715163 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.934803 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.024379 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.106069 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:44.106200 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:44.607243 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:41.582062 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692062 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692156 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.698498 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:41.709171 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:41.719597 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724562 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724628 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.730571 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:41.740854 1483946 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:41.745792 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:41.752228 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:41.758318 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:41.764486 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:41.770859 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:41.777155 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:41.783382 1483946 kubeadm.go:404] StartCluster: {Name:embed-certs-880612 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:41.783493 1483946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:41.783557 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:41.827659 1483946 cri.go:89] found id: ""
	I1225 13:26:41.827738 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:41.837713 1483946 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:41.837740 1483946 kubeadm.go:636] restartCluster start
	I1225 13:26:41.837788 1483946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:41.846668 1483946 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.847773 1483946 kubeconfig.go:92] found "embed-certs-880612" server: "https://192.168.50.179:8443"
	I1225 13:26:41.850105 1483946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:41.859124 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.859196 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.870194 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.359810 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.359906 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.371508 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.860078 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.860167 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.876302 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.359657 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.359761 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.376765 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.859950 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.860067 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.878778 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.359355 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.359439 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.371780 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.859294 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.859429 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.872286 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.359315 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.359438 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.375926 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.859453 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.859560 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.875608 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.360180 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.360335 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.376143 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.335832 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336405 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336439 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:46.336342 1484550 retry.go:31] will retry after 2.75487061s: waiting for machine to come up
	I1225 13:26:49.092529 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092962 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092986 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:49.092926 1484550 retry.go:31] will retry after 4.456958281s: waiting for machine to come up
	I1225 13:26:45.106806 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:45.607205 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.106726 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.606675 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.628821 1483118 api_server.go:72] duration metric: took 2.522750929s to wait for apiserver process to appear ...
	I1225 13:26:46.628852 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:46.628878 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.629487 1483118 api_server.go:269] stopped: https://192.168.72.232:8443/healthz: Get "https://192.168.72.232:8443/healthz": dial tcp 192.168.72.232:8443: connect: connection refused
	I1225 13:26:47.129325 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.860130 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.860255 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.875574 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.360120 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.360254 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.375470 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.860119 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.860205 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.875015 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.359513 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.359649 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.374270 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.859330 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.859424 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.871789 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.359307 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.359403 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.371339 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.859669 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.859766 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.872882 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.359345 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.359455 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.370602 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.859148 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.859271 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.871042 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.359459 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.359544 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.371252 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.824734 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:26:50.824772 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:26:50.824789 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:50.996870 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:50.996923 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.129079 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.134132 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.134169 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.629263 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.635273 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.635305 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:52.129955 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:52.135538 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:26:52.144432 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:26:52.144470 1483118 api_server.go:131] duration metric: took 5.515610636s to wait for apiserver health ...
	I1225 13:26:52.144483 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:52.144491 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:52.146289 1483118 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:26:52.147684 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:26:52.187156 1483118 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:26:52.210022 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:26:52.225137 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:26:52.225190 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:26:52.225200 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:26:52.225218 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:26:52.225230 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:26:52.225239 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:26:52.225248 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:26:52.225262 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:26:52.225272 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:26:52.225288 1483118 system_pods.go:74] duration metric: took 15.241676ms to wait for pod list to return data ...
	I1225 13:26:52.225300 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:26:52.229429 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:26:52.229471 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:26:52.229527 1483118 node_conditions.go:105] duration metric: took 4.217644ms to run NodePressure ...
	I1225 13:26:52.229549 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.630596 1483118 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635810 1483118 kubeadm.go:787] kubelet initialised
	I1225 13:26:52.635835 1483118 kubeadm.go:788] duration metric: took 5.192822ms waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635844 1483118 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:52.645095 1483118 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.652146 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652181 1483118 pod_ready.go:81] duration metric: took 7.040805ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.652194 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652203 1483118 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.658310 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658347 1483118 pod_ready.go:81] duration metric: took 6.126503ms waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.658359 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658369 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.663826 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663871 1483118 pod_ready.go:81] duration metric: took 5.492644ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.663884 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663893 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.669098 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669137 1483118 pod_ready.go:81] duration metric: took 5.230934ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.669148 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669157 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.035736 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035782 1483118 pod_ready.go:81] duration metric: took 366.614624ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.035796 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035806 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.435089 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435123 1483118 pod_ready.go:81] duration metric: took 399.30822ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.435135 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435145 1483118 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.835248 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835280 1483118 pod_ready.go:81] duration metric: took 400.124904ms waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.835290 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835299 1483118 pod_ready.go:38] duration metric: took 1.199443126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:53.835317 1483118 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:26:53.848912 1483118 ops.go:34] apiserver oom_adj: -16
	I1225 13:26:53.848954 1483118 kubeadm.go:640] restartCluster took 21.184297233s
	I1225 13:26:53.848965 1483118 kubeadm.go:406] StartCluster complete in 21.235197323s
	I1225 13:26:53.849001 1483118 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.849140 1483118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:26:53.851909 1483118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.852278 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:26:53.852353 1483118 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:26:53.852461 1483118 addons.go:69] Setting storage-provisioner=true in profile "no-preload-330063"
	I1225 13:26:53.852495 1483118 addons.go:237] Setting addon storage-provisioner=true in "no-preload-330063"
	W1225 13:26:53.852507 1483118 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:26:53.852514 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:53.852555 1483118 addons.go:69] Setting default-storageclass=true in profile "no-preload-330063"
	I1225 13:26:53.852579 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852607 1483118 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-330063"
	I1225 13:26:53.852871 1483118 addons.go:69] Setting metrics-server=true in profile "no-preload-330063"
	I1225 13:26:53.852895 1483118 addons.go:237] Setting addon metrics-server=true in "no-preload-330063"
	W1225 13:26:53.852904 1483118 addons.go:246] addon metrics-server should already be in state true
	I1225 13:26:53.852948 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853315 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853361 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.858023 1483118 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-330063" context rescaled to 1 replicas
	I1225 13:26:53.858077 1483118 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:26:53.861368 1483118 out.go:177] * Verifying Kubernetes components...
	I1225 13:26:53.862819 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:26:53.870209 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1225 13:26:53.870486 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I1225 13:26:53.870693 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.870807 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.871066 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I1225 13:26:53.871329 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871341 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871426 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871433 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871742 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.871770 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.872271 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872308 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.872511 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.872896 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872923 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.873167 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.873180 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.873549 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.873693 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.878043 1483118 addons.go:237] Setting addon default-storageclass=true in "no-preload-330063"
	W1225 13:26:53.878077 1483118 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:26:53.878117 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.878613 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.878657 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.891971 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I1225 13:26:53.892418 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.893067 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.893092 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.893461 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.893634 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.895563 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.897916 1483118 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:26:53.896007 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I1225 13:26:53.899799 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:26:53.899823 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:26:53.899858 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.900294 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.900987 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.901006 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.901451 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I1225 13:26:53.902344 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.902956 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.902981 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.903419 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.903917 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.903986 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.904022 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.904445 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.904452 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.904471 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.904615 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.904785 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.906582 1483118 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:53.551972 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552449 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Found IP for machine: 192.168.61.39
	I1225 13:26:53.552500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has current primary IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552515 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserving static IP address...
	I1225 13:26:53.552918 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.552967 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | skip adding static IP to network mk-default-k8s-diff-port-344803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"}
	I1225 13:26:53.552990 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserved static IP address: 192.168.61.39
	I1225 13:26:53.553003 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for SSH to be available...
	I1225 13:26:53.553041 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Getting to WaitForSSH function...
	I1225 13:26:53.555272 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555619 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.555654 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555753 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH client type: external
	I1225 13:26:53.555785 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa (-rw-------)
	I1225 13:26:53.555828 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:53.555852 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | About to run SSH command:
	I1225 13:26:53.555872 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | exit 0
	I1225 13:26:53.642574 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:53.643094 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetConfigRaw
	I1225 13:26:53.643946 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.646842 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647308 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.647351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647580 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:26:53.647806 1484104 machine.go:88] provisioning docker machine ...
	I1225 13:26:53.647827 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:53.648054 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648255 1484104 buildroot.go:166] provisioning hostname "default-k8s-diff-port-344803"
	I1225 13:26:53.648274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648485 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.650935 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651291 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.651327 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651479 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.651718 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.651887 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.652028 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.652213 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.652587 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.652605 1484104 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-344803 && echo "default-k8s-diff-port-344803" | sudo tee /etc/hostname
	I1225 13:26:53.782284 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-344803
	
	I1225 13:26:53.782315 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.785326 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785631 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.785668 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785876 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.786149 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786374 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786600 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.786806 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.787202 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.787222 1484104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-344803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-344803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-344803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:53.916809 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:53.916844 1484104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:53.916870 1484104 buildroot.go:174] setting up certificates
	I1225 13:26:53.916882 1484104 provision.go:83] configureAuth start
	I1225 13:26:53.916900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.917233 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.920048 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920377 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.920402 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920538 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.923177 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923404 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.923437 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923584 1484104 provision.go:138] copyHostCerts
	I1225 13:26:53.923666 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:53.923686 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:53.923763 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:53.923934 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:53.923947 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:53.923978 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:53.924078 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:53.924088 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:53.924115 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:53.924207 1484104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-344803 san=[192.168.61.39 192.168.61.39 localhost 127.0.0.1 minikube default-k8s-diff-port-344803]
	I1225 13:26:54.014673 1484104 provision.go:172] copyRemoteCerts
	I1225 13:26:54.014739 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:54.014772 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.018361 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.018849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.018924 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.019089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.019351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.019559 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.019949 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.120711 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:54.155907 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1225 13:26:54.192829 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:26:54.227819 1484104 provision.go:86] duration metric: configureAuth took 310.912829ms
	I1225 13:26:54.227853 1484104 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:54.228119 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:54.228236 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.232535 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232580 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.232628 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232945 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.233215 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233422 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233608 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.233801 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.234295 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.234322 1484104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:54.638656 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:54.638772 1484104 machine.go:91] provisioned docker machine in 990.950916ms
	I1225 13:26:54.638798 1484104 start.go:300] post-start starting for "default-k8s-diff-port-344803" (driver="kvm2")
	I1225 13:26:54.638821 1484104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:54.638883 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.639341 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:54.639383 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.643369 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.643810 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.643863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.644140 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.644444 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.644624 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.644774 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.740189 1484104 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:54.745972 1484104 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:54.746009 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:54.746104 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:54.746229 1484104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:54.746362 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:54.758199 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:54.794013 1484104 start.go:303] post-start completed in 155.186268ms
	I1225 13:26:54.794048 1484104 fix.go:56] fixHost completed within 20.354368879s
	I1225 13:26:54.794077 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.797620 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798092 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.798129 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798423 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.798692 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.798900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.799067 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.799293 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.799807 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.799829 1484104 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:54.933026 1482618 start.go:369] acquired machines lock for "old-k8s-version-198979" in 59.553202424s
	I1225 13:26:54.933097 1482618 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:54.933105 1482618 fix.go:54] fixHost starting: 
	I1225 13:26:54.933577 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:54.933620 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:54.956206 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I1225 13:26:54.956801 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:54.958396 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:26:54.958425 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:54.958887 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:54.959164 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:26:54.959384 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:26:54.961270 1482618 fix.go:102] recreateIfNeeded on old-k8s-version-198979: state=Stopped err=<nil>
	I1225 13:26:54.961305 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	W1225 13:26:54.961494 1482618 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:54.963775 1482618 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-198979" ...
	I1225 13:26:53.904908 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.908114 1483118 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:53.908130 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:26:53.908147 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.908370 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.912254 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.912861 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.912885 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.913096 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.913324 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.913510 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.913629 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.959638 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I1225 13:26:53.960190 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.960890 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.960913 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.961320 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.961603 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.963927 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.964240 1483118 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:53.964262 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:26:53.964282 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.967614 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968092 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.968155 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968471 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.968679 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.968879 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.969040 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:54.064639 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:26:54.064674 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:26:54.093609 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:54.147415 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:26:54.147449 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:26:54.148976 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:54.160381 1483118 node_ready.go:35] waiting up to 6m0s for node "no-preload-330063" to be "Ready" ...
	I1225 13:26:54.160490 1483118 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:26:54.202209 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.202242 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:26:54.276251 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.965270 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Start
	I1225 13:26:54.965680 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring networks are active...
	I1225 13:26:54.966477 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network default is active
	I1225 13:26:54.966919 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network mk-old-k8s-version-198979 is active
	I1225 13:26:54.967420 1482618 main.go:141] libmachine: (old-k8s-version-198979) Getting domain xml...
	I1225 13:26:54.968585 1482618 main.go:141] libmachine: (old-k8s-version-198979) Creating domain...
	I1225 13:26:55.590940 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.497275379s)
	I1225 13:26:55.591005 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591020 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591108 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442107411s)
	I1225 13:26:55.591127 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591136 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591247 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.314957717s)
	I1225 13:26:55.591268 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.595765 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.595838 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.595847 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.595859 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.595867 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596016 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596049 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596058 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596067 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596075 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596177 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596218 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596226 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596236 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596244 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596485 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596515 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596929 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596972 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596979 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596990 1483118 addons.go:473] Verifying addon metrics-server=true in "no-preload-330063"
	I1225 13:26:55.597032 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.597067 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.597076 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.610755 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.610788 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.611238 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.611264 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.613767 1483118 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1225 13:26:51.859989 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.860081 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.871647 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.871684 1483946 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:51.871709 1483946 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:51.871725 1483946 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:51.871817 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:51.919587 1483946 cri.go:89] found id: ""
	I1225 13:26:51.919706 1483946 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:51.935341 1483946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:51.944522 1483946 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:51.944588 1483946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954607 1483946 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954637 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.092831 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.921485 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.161902 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.270786 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.340226 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:53.340331 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:53.841309 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.341486 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.841104 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.341409 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.841238 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.867371 1483946 api_server.go:72] duration metric: took 2.52714535s to wait for apiserver process to appear ...
	I1225 13:26:55.867406 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:55.867434 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:55.867970 1483946 api_server.go:269] stopped: https://192.168.50.179:8443/healthz: Get "https://192.168.50.179:8443/healthz": dial tcp 192.168.50.179:8443: connect: connection refused
	I1225 13:26:56.368335 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
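The 1483946 block above shows the post-kubeadm wait pattern: poll for a kube-apiserver process with pgrep, then poll the /healthz endpoint until it stops returning connection refused. The shell loop below is a rough sketch of that shape, reconstructed from the Run: and "Checking apiserver healthz" lines; the real check lives in minikube's api_server.go and authenticates with the profile's client certificate, so the unauthenticated curl, the 2-second interval, and the retry cap here are illustrative assumptions only.

	# Sketch only: approximate the apiserver wait loop from api_server.go in shell.
	# The endpoint is taken from the log; the flags, interval, and cap are assumptions.
	APISERVER=https://192.168.50.179:8443/healthz
	# 1) wait for the kube-apiserver process to exist
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 0.5
	done
	# 2) wait for /healthz to answer "ok" (add --cert/--key if anonymous access is disabled)
	for _ in $(seq 1 120); do
	        if curl --insecure --silent --max-time 2 "$APISERVER" | grep -q '^ok$'; then
	                echo "apiserver healthy"
	                break
	        fi
	        sleep 2
	done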
	I1225 13:26:54.932810 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510814.876127642
	
	I1225 13:26:54.932838 1484104 fix.go:206] guest clock: 1703510814.876127642
	I1225 13:26:54.932848 1484104 fix.go:219] Guest: 2023-12-25 13:26:54.876127642 +0000 UTC Remote: 2023-12-25 13:26:54.794053361 +0000 UTC m=+104.977714576 (delta=82.074281ms)
	I1225 13:26:54.932878 1484104 fix.go:190] guest clock delta is within tolerance: 82.074281ms
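The guest-clock check above is minikube's fix.go comparing the VM's wall clock with the host's: the earlier `date +%!s(MISSING).%!N(MISSING)` line is `date +%s.%N` with its format verbs swallowed by the logger, and the 1703510814.876127642 output is the guest's epoch.nanoseconds reading that yields the 82ms delta. A minimal shell sketch of that comparison follows; the SSH target and the one-second tolerance are assumptions for illustration, not minikube's actual values.

	# Sketch only: compare the guest clock with the host clock over SSH.
	# GUEST is hypothetical; the +/-1s tolerance is an assumption, not fix.go's value.
	GUEST=docker@192.168.61.39
	host_now=$(date +%s.%N)
	guest_now=$(ssh "$GUEST" 'date +%s.%N')
	delta=$(awk -v g="$guest_now" -v h="$host_now" 'BEGIN { printf "%.6f", g - h }')
	echo "guest clock delta: ${delta}s"
	awk -v d="$delta" 'BEGIN { exit (d < 1 && d > -1) ? 0 : 1 }' \
	        && echo "within tolerance" || echo "clock drift too large"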
	I1225 13:26:54.932885 1484104 start.go:83] releasing machines lock for "default-k8s-diff-port-344803", held for 20.493256775s
	I1225 13:26:54.932920 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.933380 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:54.936626 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.937262 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937534 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938366 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938583 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938676 1484104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:54.938722 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.938826 1484104 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:54.938854 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.942392 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.942792 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.942831 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.943292 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.943487 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.943635 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.943764 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.943922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.944870 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.945020 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.945066 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.945318 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.945498 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.945743 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:55.069674 1484104 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:55.078333 1484104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:55.247706 1484104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:55.256782 1484104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:55.256885 1484104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:55.278269 1484104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:55.278303 1484104 start.go:475] detecting cgroup driver to use...
	I1225 13:26:55.278383 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:55.302307 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:55.322161 1484104 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:55.322345 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:55.342241 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:55.361128 1484104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:55.547880 1484104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:55.693711 1484104 docker.go:219] disabling docker service ...
	I1225 13:26:55.693804 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:55.708058 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:55.721136 1484104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:55.890044 1484104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:56.042549 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:56.061359 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:56.086075 1484104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:56.086169 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.100059 1484104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:56.100162 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.113858 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.127589 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
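For reference, the three sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with values along these lines. Only the edited keys are visible in the log; the TOML section headers are the usual CRI-O ones and are an assumption here:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"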
	I1225 13:26:56.140964 1484104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:56.155180 1484104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:56.167585 1484104 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:56.167716 1484104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:56.186467 1484104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
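The crio.go:148 warning above is the expected path on a fresh VM: the bridge-nf-call-iptables sysctl only exists once br_netfilter is loaded, so the failed probe triggers a modprobe and then IPv4 forwarding is enabled. Below is a minimal local Go sketch of that same fallback; minikube issues these commands over SSH via ssh_runner, whereas this sketch runs them directly and must be run as root.

// netfilter_sketch.go — sketch of the "probe sysctl, fall back to modprobe,
// enable ip_forward" sequence shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl exits non-zero when br_netfilter is not loaded, because
	// /proc/sys/net/bridge/ does not exist yet (status 255 in the log).
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("IPv4 forwarding enabled")
}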
	I1225 13:26:56.200044 1484104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:56.339507 1484104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:56.563294 1484104 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:56.563385 1484104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:56.570381 1484104 start.go:543] Will wait 60s for crictl version
	I1225 13:26:56.570477 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:26:56.575675 1484104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:56.617219 1484104 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:56.617322 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.679138 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.751125 1484104 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:56.752677 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:56.756612 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757108 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:56.757142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757502 1484104 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:56.763739 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:56.781952 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:56.782029 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:56.840852 1484104 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:56.840939 1484104 ssh_runner.go:195] Run: which lz4
	I1225 13:26:56.845412 1484104 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:56.850135 1484104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:56.850181 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:58.731034 1484104 crio.go:444] Took 1.885656 seconds to copy over tarball
	I1225 13:26:58.731138 1484104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:55.615056 1483118 addons.go:508] enable addons completed in 1.762702944s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1225 13:26:56.169115 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:58.665700 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:56.860066 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting to get IP...
	I1225 13:26:56.860987 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:56.861644 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:56.861765 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:56.861626 1484760 retry.go:31] will retry after 198.102922ms: waiting for machine to come up
	I1225 13:26:57.061281 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.062001 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.062029 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.061907 1484760 retry.go:31] will retry after 299.469436ms: waiting for machine to come up
	I1225 13:26:57.362874 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.363385 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.363441 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.363363 1484760 retry.go:31] will retry after 460.796393ms: waiting for machine to come up
	I1225 13:26:57.826330 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.827065 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.827098 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.827021 1484760 retry.go:31] will retry after 397.690798ms: waiting for machine to come up
	I1225 13:26:58.226942 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.227490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.227528 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.227437 1484760 retry.go:31] will retry after 731.798943ms: waiting for machine to come up
	I1225 13:26:58.960490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.960969 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.961000 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.960930 1484760 retry.go:31] will retry after 577.614149ms: waiting for machine to come up
	I1225 13:26:59.540871 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:59.541581 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:59.541607 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:59.541494 1484760 retry.go:31] will retry after 1.177902051s: waiting for machine to come up
	I1225 13:27:00.799310 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.799355 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.799376 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.905272 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.905311 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.905330 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.922285 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.922324 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:01.367590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.374093 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.374155 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
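The 403 and 500 answers above are both treated as "not ready yet": 403 because the unauthenticated probe runs as system:anonymous, 500 while post-start hooks such as rbac/bootstrap-roles are still pending. A minimal sketch of this kind of /healthz polling follows (endpoint taken from the log; TLS verification is skipped because no client certificate is presented, which is exactly why the anonymous 403s appear). This is an illustration of the retry logic, not minikube's api_server.go itself.

// healthz_sketch.go — poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.50.179:8443/healthz"

	for attempt := 0; attempt < 60; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending)
			// both mean "keep waiting", as in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}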
	I1225 13:27:02.440592 1484104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.709419632s)
	I1225 13:27:02.440624 1484104 crio.go:451] Took 3.709555 seconds to extract the tarball
	I1225 13:27:02.440636 1484104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:02.504136 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:02.613720 1484104 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:27:02.613752 1484104 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:27:02.613839 1484104 ssh_runner.go:195] Run: crio config
	I1225 13:27:02.685414 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:02.685436 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:02.685459 1484104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:02.685477 1484104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-344803 NodeName:default-k8s-diff-port-344803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:27:02.685627 1484104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-344803"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:02.685710 1484104 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-344803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
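The kubelet drop-in printed above (kubeadm.go:976) is rendered from the cluster config. The sketch below reproduces a drop-in of the same shape with text/template, filled with the values from this run; it is an illustration only, not minikube's actual template.

// kubelet_dropin_sketch.go — render a 10-kubeadm.conf-style drop-in like the one in the log.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.28.4",
		"NodeName":          "default-k8s-diff-port-344803",
		"NodeIP":            "192.168.61.39",
	})
}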
	I1225 13:27:02.685778 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:27:02.696327 1484104 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:02.696420 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:02.707369 1484104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1225 13:27:02.728181 1484104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:02.748934 1484104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1225 13:27:02.770783 1484104 ssh_runner.go:195] Run: grep 192.168.61.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:02.775946 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:02.790540 1484104 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803 for IP: 192.168.61.39
	I1225 13:27:02.790590 1484104 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:02.790792 1484104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:02.790862 1484104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:02.790961 1484104 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.key
	I1225 13:27:02.859647 1484104 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key.daee23f3
	I1225 13:27:02.859773 1484104 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key
	I1225 13:27:02.859934 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:02.859993 1484104 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:02.860010 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:02.860037 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:02.860061 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:02.860082 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:02.860121 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:02.860871 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:02.889354 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 13:27:02.916983 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:02.943348 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:27:02.969940 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:02.996224 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:03.021662 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:03.052589 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:03.080437 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:03.107973 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:03.134921 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:03.161948 1484104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:03.184606 1484104 ssh_runner.go:195] Run: openssl version
	I1225 13:27:03.192305 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:03.204868 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209793 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209895 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.216568 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:03.229131 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:03.241634 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247328 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247397 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.253730 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:03.267063 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:03.281957 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288393 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288481 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.295335 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:03.307900 1484104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:03.313207 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:03.319949 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:03.327223 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:03.333927 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:03.341434 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:03.349298 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
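Each of the openssl x509 -noout -checkend 86400 runs above asks whether a certificate expires within the next 24 hours; a passing check lets minikube reuse the existing certificate instead of regenerating it. A minimal Go equivalent of that check is sketched below (the path is one of the certificates checked above).

// certcheck_sketch.go — report whether a PEM certificate expires within a given duration,
// the same question `openssl x509 -checkend 86400` answers.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expired already, or will expire before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}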
	I1225 13:27:03.356303 1484104 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:03.356409 1484104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:03.356463 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:03.407914 1484104 cri.go:89] found id: ""
	I1225 13:27:03.407991 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:03.418903 1484104 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:03.418928 1484104 kubeadm.go:636] restartCluster start
	I1225 13:27:03.418981 1484104 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:03.429758 1484104 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.431242 1484104 kubeconfig.go:92] found "default-k8s-diff-port-344803" server: "https://192.168.61.39:8444"
	I1225 13:27:03.433847 1484104 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:03.443564 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.443648 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.457188 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.943692 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.943806 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.956490 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:04.443680 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.443781 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.464817 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:00.671397 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:27:01.665347 1483118 node_ready.go:49] node "no-preload-330063" has status "Ready":"True"
	I1225 13:27:01.665383 1483118 node_ready.go:38] duration metric: took 7.504959726s waiting for node "no-preload-330063" to be "Ready" ...
	I1225 13:27:01.665398 1483118 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:01.675515 1483118 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688377 1483118 pod_ready.go:92] pod "coredns-76f75df574-pwk9h" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:01.688467 1483118 pod_ready.go:81] duration metric: took 12.819049ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688492 1483118 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:03.697007 1483118 pod_ready.go:102] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:04.379595 1483118 pod_ready.go:92] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.379628 1483118 pod_ready.go:81] duration metric: took 2.691119222s waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.379643 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393427 1483118 pod_ready.go:92] pod "kube-apiserver-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.393459 1483118 pod_ready.go:81] duration metric: took 13.806505ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393473 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454291 1483118 pod_ready.go:92] pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.454387 1483118 pod_ready.go:81] duration metric: took 60.903507ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454417 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525436 1483118 pod_ready.go:92] pod "kube-proxy-jbch6" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.525471 1483118 pod_ready.go:81] duration metric: took 71.040817ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525486 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546670 1483118 pod_ready.go:92] pod "kube-scheduler-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.546709 1483118 pod_ready.go:81] duration metric: took 21.213348ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546726 1483118 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
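The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, with a 6m0s budget per pod. A minimal client-go sketch of that kind of wait follows; the kubeconfig path is illustrative, and the pod name is one from the log.

// podready_sketch.go — wait until a pod's PodReady condition is True, in the spirit
// of the pod_ready.go waits above (not minikube's actual helper).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-330063", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}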
	I1225 13:27:01.868308 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.913335 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.913393 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.367660 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.375382 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.375424 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.867590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.873638 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.873680 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.368014 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.377785 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.377827 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.867933 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.873979 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.874013 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.367576 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.377835 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.377884 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.868444 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.879138 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.879187 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:05.367519 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:05.377570 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:27:05.388572 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:05.388605 1483946 api_server.go:131] duration metric: took 9.521192442s to wait for apiserver health ...
	I1225 13:27:05.388615 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:27:05.388625 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:05.390592 1483946 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:00.720918 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:00.721430 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:00.721457 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:00.721380 1484760 retry.go:31] will retry after 931.125211ms: waiting for machine to come up
	I1225 13:27:01.654661 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:01.655341 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:01.655367 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:01.655263 1484760 retry.go:31] will retry after 1.333090932s: waiting for machine to come up
	I1225 13:27:02.991018 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:02.991520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:02.991555 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:02.991468 1484760 retry.go:31] will retry after 2.006684909s: waiting for machine to come up
	I1225 13:27:05.000424 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:05.000972 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:05.001023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:05.000908 1484760 retry.go:31] will retry after 2.72499386s: waiting for machine to come up
	I1225 13:27:05.391952 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:05.406622 1483946 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:05.429599 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:05.441614 1483946 system_pods.go:59] 9 kube-system pods found
	I1225 13:27:05.441681 1483946 system_pods.go:61] "coredns-5dd5756b68-4jqz4" [026524a6-1f73-4644-8a80-b276326178b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441698 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441710 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:05.441721 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:05.441732 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:05.441746 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:05.441758 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:05.441773 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:05.441790 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:27:05.441812 1483946 system_pods.go:74] duration metric: took 12.174684ms to wait for pod list to return data ...
	I1225 13:27:05.441824 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:05.447018 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:05.447064 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:05.447079 1483946 node_conditions.go:105] duration metric: took 5.247366ms to run NodePressure ...
	I1225 13:27:05.447106 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:05.767972 1483946 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774281 1483946 kubeadm.go:787] kubelet initialised
	I1225 13:27:05.774307 1483946 kubeadm.go:788] duration metric: took 6.300121ms waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774316 1483946 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:05.781474 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.789698 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789732 1483946 pod_ready.go:81] duration metric: took 8.22748ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.789746 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789758 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.798517 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798584 1483946 pod_ready.go:81] duration metric: took 8.811967ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.798601 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798612 1483946 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.804958 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.804998 1483946 pod_ready.go:81] duration metric: took 6.356394ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.805018 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.805028 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.834502 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834549 1483946 pod_ready.go:81] duration metric: took 29.510044ms waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.834561 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834571 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.234676 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234728 1483946 pod_ready.go:81] duration metric: took 400.145957ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.234742 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234752 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.634745 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634785 1483946 pod_ready.go:81] duration metric: took 400.019189ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.634798 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634807 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.034762 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034793 1483946 pod_ready.go:81] duration metric: took 399.977148ms waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.034803 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034810 1483946 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.433932 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433969 1483946 pod_ready.go:81] duration metric: took 399.14889ms waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.433982 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433992 1483946 pod_ready.go:38] duration metric: took 1.659666883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:07.434016 1483946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:07.448377 1483946 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:07.448405 1483946 kubeadm.go:640] restartCluster took 25.610658268s
	I1225 13:27:07.448415 1483946 kubeadm.go:406] StartCluster complete in 25.665045171s
	I1225 13:27:07.448443 1483946 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.448530 1483946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:07.451369 1483946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.453102 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:07.453244 1483946 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:07.453332 1483946 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880612"
	I1225 13:27:07.453351 1483946 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-880612"
	W1225 13:27:07.453363 1483946 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:07.453432 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453450 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:27:07.453516 1483946 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880612"
	I1225 13:27:07.453536 1483946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880612"
	I1225 13:27:07.453860 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453870 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453902 1483946 addons.go:69] Setting metrics-server=true in profile "embed-certs-880612"
	I1225 13:27:07.453917 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.453925 1483946 addons.go:237] Setting addon metrics-server=true in "embed-certs-880612"
	W1225 13:27:07.454160 1483946 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:07.454211 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453903 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.454601 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.454669 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.476508 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1225 13:27:07.476720 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1225 13:27:07.477202 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477210 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477794 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477815 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.477957 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477971 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.478407 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.478478 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.479041 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.479083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.480350 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.483762 1483946 addons.go:237] Setting addon default-storageclass=true in "embed-certs-880612"
	W1225 13:27:07.483783 1483946 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:07.483816 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.484249 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.484285 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.489369 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1225 13:27:07.489817 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.490332 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.490354 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.491339 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.494037 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.494083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.501003 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1225 13:27:07.501737 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.502399 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.502422 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.502882 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.503092 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.505387 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.507725 1483946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:07.509099 1483946 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.509121 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:07.509153 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.513153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.513923 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.513957 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.514226 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.514426 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.514610 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.515190 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.516933 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I1225 13:27:07.517681 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.518194 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.518220 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.518784 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1225 13:27:07.519309 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.519400 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.519930 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.519956 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.520525 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.520573 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.520819 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.521050 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.523074 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.525265 1483946 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:07.526542 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:07.526569 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:07.526598 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.530316 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.530846 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.530883 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.531223 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.531571 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.531832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.532070 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.544917 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1225 13:27:07.545482 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.546037 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.546059 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.546492 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.546850 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.548902 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.549177 1483946 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.549196 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:07.549218 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.553036 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553541 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.553572 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553784 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.554642 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.554893 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.555581 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.676244 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.704310 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.718012 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:07.718043 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:07.779041 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:07.779073 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:07.786154 1483946 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:07.812338 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.812373 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:07.837795 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.974099 1483946 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-880612" context rescaled to 1 replicas
	I1225 13:27:07.974158 1483946 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:07.977116 1483946 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:07.978618 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:09.163988 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.459630406s)
	I1225 13:27:09.164059 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164073 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164091 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.487803106s)
	I1225 13:27:09.164129 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164149 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164617 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164624 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164629 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.164639 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164641 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164651 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164653 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164661 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164666 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164622 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165025 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165095 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165121 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.165172 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165186 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.188483 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.188510 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.188847 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.188898 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.188906 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.193684 1483946 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.215023208s)
	I1225 13:27:09.193736 1483946 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:09.193789 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.355953438s)
	I1225 13:27:09.193825 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.193842 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.194176 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.194192 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.194208 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.194219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.195998 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.196000 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.196033 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.196044 1483946 addons.go:473] Verifying addon metrics-server=true in "embed-certs-880612"
	I1225 13:27:09.198211 1483946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:04.943819 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.943958 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.960056 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.443699 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.443795 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.461083 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.943713 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.943821 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.960712 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.444221 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.444305 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.458894 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.944546 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.944630 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.958754 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.444332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.444462 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.491468 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.943982 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.944135 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.960697 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.444285 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.444408 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.461209 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.943720 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.943866 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.959990 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:09.444604 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.444727 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.463020 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.556605 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:08.560748 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:07.728505 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:07.728994 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:07.729023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:07.728936 1484760 retry.go:31] will retry after 2.39810797s: waiting for machine to come up
	I1225 13:27:10.129402 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:10.129925 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:10.129960 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:10.129860 1484760 retry.go:31] will retry after 4.278491095s: waiting for machine to come up
	I1225 13:27:09.199531 1483946 addons.go:508] enable addons completed in 1.746293071s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:11.199503 1483946 node_ready.go:49] node "embed-certs-880612" has status "Ready":"True"
	I1225 13:27:11.199529 1483946 node_ready.go:38] duration metric: took 2.005779632s waiting for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:11.199541 1483946 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:11.207447 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:09.943841 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.943948 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.960478 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.444037 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.444309 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.463480 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.943760 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.943886 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.960191 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.444602 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.444702 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.458181 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.943674 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.943783 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.956418 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.443719 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.443835 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.456707 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.944332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.944434 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.957217 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.443965 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:13.444076 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:13.455968 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.456008 1484104 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:13.456051 1484104 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:13.456067 1484104 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:13.456145 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:13.497063 1484104 cri.go:89] found id: ""
	I1225 13:27:13.497135 1484104 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:13.513279 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:13.522816 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:13.522885 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532580 1484104 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532612 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:13.668876 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:14.848056 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179140695s)
	I1225 13:27:14.848090 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:11.072420 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:13.555685 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:14.413456 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:14.414013 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:14.414043 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:14.413960 1484760 retry.go:31] will retry after 4.470102249s: waiting for machine to come up
	I1225 13:27:11.714710 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.714747 1483946 pod_ready.go:81] duration metric: took 507.263948ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.714760 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720448 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.720472 1483946 pod_ready.go:81] duration metric: took 5.705367ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720481 1483946 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725691 1483946 pod_ready.go:92] pod "etcd-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.725717 1483946 pod_ready.go:81] duration metric: took 5.229718ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725725 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238949 1483946 pod_ready.go:92] pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.238979 1483946 pod_ready.go:81] duration metric: took 1.513246575s waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238992 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244957 1483946 pod_ready.go:92] pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.244980 1483946 pod_ready.go:81] duration metric: took 5.981457ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244991 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609255 1483946 pod_ready.go:92] pod "kube-proxy-677d7" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.609282 1483946 pod_ready.go:81] duration metric: took 364.285426ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609292 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621505 1483946 pod_ready.go:92] pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:15.621540 1483946 pod_ready.go:81] duration metric: took 2.012239726s waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621553 1483946 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.047153 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.142405 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.237295 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:15.237406 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:15.737788 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.238003 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.738328 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.238494 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.738177 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.237676 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.259279 1484104 api_server.go:72] duration metric: took 3.021983877s to wait for apiserver process to appear ...
	I1225 13:27:18.259305 1484104 api_server.go:88] waiting for apiserver healthz status ...
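The api_server.go lines above wait for the kube-apiserver process to appear by re-running pgrep roughly every half second until it exits 0, then record the elapsed duration. A minimal local sketch of that loop, assuming sudo and pgrep are available on the host (minikube itself runs the command over SSH inside the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess retries pgrep until a matching process exists or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		// pgrep exits 0 only when a process matching the pattern is found.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Printf("duration metric: took %s to wait for apiserver process to appear\n", time.Since(start))
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }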
	I1225 13:27:18.259331 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:15.555810 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.056361 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.888547 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889138 1482618 main.go:141] libmachine: (old-k8s-version-198979) Found IP for machine: 192.168.39.186
	I1225 13:27:18.889167 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserving static IP address...
	I1225 13:27:18.889183 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has current primary IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889631 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.889672 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserved static IP address: 192.168.39.186
	I1225 13:27:18.889702 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | skip adding static IP to network mk-old-k8s-version-198979 - found existing host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"}
	I1225 13:27:18.889724 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Getting to WaitForSSH function...
	I1225 13:27:18.889741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting for SSH to be available...
	I1225 13:27:18.892133 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892475 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.892509 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH client type: external
	I1225 13:27:18.892658 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa (-rw-------)
	I1225 13:27:18.892688 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:27:18.892703 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | About to run SSH command:
	I1225 13:27:18.892722 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | exit 0
	I1225 13:27:18.991797 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | SSH cmd err, output: <nil>: 
	I1225 13:27:18.992203 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetConfigRaw
	I1225 13:27:18.992943 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:18.996016 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996344 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.996416 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996762 1482618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:27:18.996990 1482618 machine.go:88] provisioning docker machine ...
	I1225 13:27:18.997007 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:18.997254 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997454 1482618 buildroot.go:166] provisioning hostname "old-k8s-version-198979"
	I1225 13:27:18.997483 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997670 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.000725 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001114 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.001144 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001332 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.001504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001686 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001836 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.002039 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.002592 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.002614 1482618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-198979 && echo "old-k8s-version-198979" | sudo tee /etc/hostname
	I1225 13:27:19.148260 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-198979
	
	I1225 13:27:19.148291 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.151692 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152160 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.152196 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152350 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.152566 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152743 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152941 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.153133 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.153647 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.153678 1482618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-198979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-198979/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-198979' | sudo tee -a /etc/hosts; 
				fi
			fi
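The two SSH commands above set the guest hostname and patch /etc/hosts accordingly. A minimal sketch of running such a command over SSH with golang.org/x/crypto/ssh; the key path is a placeholder and this is an illustration only, not minikube's own SSH runner:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machines/old-k8s-version-198979/id_rsa") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.186:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	hostname := "old-k8s-version-198979"
    	cmd := fmt.Sprintf(`sudo hostname %s && echo %q | sudo tee /etc/hostname`, hostname, hostname)
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }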
	I1225 13:27:19.294565 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:27:19.294606 1482618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:27:19.294635 1482618 buildroot.go:174] setting up certificates
	I1225 13:27:19.294649 1482618 provision.go:83] configureAuth start
	I1225 13:27:19.294663 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:19.295039 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:19.298511 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.298933 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.298971 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.299137 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.302045 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302486 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.302520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302682 1482618 provision.go:138] copyHostCerts
	I1225 13:27:19.302777 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:27:19.302806 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:27:19.302869 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:27:19.302994 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:27:19.303012 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:27:19.303042 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:27:19.303103 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:27:19.303113 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:27:19.303131 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:27:19.303177 1482618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-198979 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube old-k8s-version-198979]
	I1225 13:27:19.444049 1482618 provision.go:172] copyRemoteCerts
	I1225 13:27:19.444142 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:27:19.444180 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.447754 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448141 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.448174 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448358 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.448593 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.448818 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.448994 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:19.545298 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:27:19.576678 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:27:19.604520 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:27:19.631640 1482618 provision.go:86] duration metric: configureAuth took 336.975454ms
	I1225 13:27:19.631674 1482618 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:27:19.631899 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:19.632012 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.635618 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636130 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.636166 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636644 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.636903 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637088 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637315 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.637511 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.638005 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.638040 1482618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:27:19.990807 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:27:19.990844 1482618 machine.go:91] provisioned docker machine in 993.840927ms
	I1225 13:27:19.990857 1482618 start.go:300] post-start starting for "old-k8s-version-198979" (driver="kvm2")
	I1225 13:27:19.990870 1482618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:27:19.990908 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:19.991349 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:27:19.991388 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.994622 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.994980 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.995015 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.995147 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.995402 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.995574 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.995713 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.089652 1482618 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:27:20.094575 1482618 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:27:20.094611 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:27:20.094716 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:27:20.094856 1482618 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:27:20.095010 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:27:20.105582 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:20.133802 1482618 start.go:303] post-start completed in 142.928836ms
	I1225 13:27:20.133830 1482618 fix.go:56] fixHost completed within 25.200724583s
	I1225 13:27:20.133860 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.137215 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137635 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.137670 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.138081 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138322 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138518 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.138732 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:20.139194 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:20.139228 1482618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:27:20.268572 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510840.203941272
	
	I1225 13:27:20.268602 1482618 fix.go:206] guest clock: 1703510840.203941272
	I1225 13:27:20.268613 1482618 fix.go:219] Guest: 2023-12-25 13:27:20.203941272 +0000 UTC Remote: 2023-12-25 13:27:20.133835417 +0000 UTC m=+384.781536006 (delta=70.105855ms)
	I1225 13:27:20.268641 1482618 fix.go:190] guest clock delta is within tolerance: 70.105855ms
	I1225 13:27:20.268651 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 25.335582747s
	I1225 13:27:20.268683 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.268981 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:20.272181 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.272666 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272948 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273612 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273851 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273925 1482618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:27:20.273990 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.274108 1482618 ssh_runner.go:195] Run: cat /version.json
	I1225 13:27:20.274133 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.277090 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277381 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.277608 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278041 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278066 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.278085 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.278284 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278293 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278500 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.278516 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278691 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278852 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.395858 1482618 ssh_runner.go:195] Run: systemctl --version
	I1225 13:27:20.403417 1482618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:27:17.629846 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:19.635250 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:20.559485 1482618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:27:20.566356 1482618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:27:20.566487 1482618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:27:20.584531 1482618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:27:20.584565 1482618 start.go:475] detecting cgroup driver to use...
	I1225 13:27:20.584648 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:27:20.599889 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:27:20.613197 1482618 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:27:20.613278 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:27:20.626972 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:27:20.640990 1482618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:27:20.752941 1482618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:27:20.886880 1482618 docker.go:219] disabling docker service ...
	I1225 13:27:20.886971 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:27:20.903143 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:27:20.919083 1482618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:27:21.042116 1482618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:27:21.171997 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:27:21.185237 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:27:21.204711 1482618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:27:21.204787 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.215196 1482618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:27:21.215276 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.226411 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.239885 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
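The sed invocations above point CRI-O at the registry.k8s.io/pause:3.1 pause image and switch it to the cgroupfs cgroup manager with conmon placed in the pod cgroup. An equivalent sketch in Go, assuming the 02-crio.conf file shown in the log exists; this is only an illustration of the same edits, not minikube's code:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	// Pause image: same effect as the first sed above.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
    	// Drop any existing conmon_cgroup line, mirroring sed '/conmon_cgroup = .*/d'.
    	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	// Set the cgroup driver and re-add conmon_cgroup directly after it.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }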
	I1225 13:27:21.250576 1482618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:27:21.263723 1482618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:27:21.274356 1482618 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:27:21.274462 1482618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:27:21.288126 1482618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:27:21.300772 1482618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:27:21.467651 1482618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:27:21.700509 1482618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:27:21.700618 1482618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:27:21.708118 1482618 start.go:543] Will wait 60s for crictl version
	I1225 13:27:21.708207 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:21.712687 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:27:21.768465 1482618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:27:21.768563 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.836834 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.907627 1482618 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1225 13:27:21.288635 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.288669 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.288685 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.374966 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.375010 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.760268 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.771864 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:21.771898 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.259417 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.271720 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.271779 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.760217 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.767295 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.767333 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:23.259377 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:23.265348 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:27:23.275974 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:23.276010 1484104 api_server.go:131] duration metric: took 5.01669783s to wait for apiserver health ...
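The healthz probes above progress from 403 (anonymous requests are forbidden) through 500 (post-start hooks still failing) to 200 once the apiserver is healthy. A minimal sketch of that polling loop; skipping TLS verification here is only to keep the example short and is not how minikube authenticates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.39:8444/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body) // typically just "ok"
    				return
    			}
    			// 403 or 500 with the per-hook breakdown, as in the log above.
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }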
	I1225 13:27:23.276024 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:23.276033 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:23.278354 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:23.279804 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:23.300762 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:23.326548 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:23.346826 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:27:23.346871 1484104 system_pods.go:61] "coredns-5dd5756b68-l7qnn" [860c88a5-5bb9-4556-814a-08f1cc882c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:23.346884 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [eca3b322-fbba-4d8e-b8be-10b7f552bd32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:23.346896 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [730b8b80-bf80-4769-b4cd-7e81b0600599] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:23.346908 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [8424df4f-e2d8-4f22-8593-21cf0ccc82eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:23.346965 1484104 system_pods.go:61] "kube-proxy-wnjn2" [ed9e8d7e-d237-46ab-84d1-a78f7f931aab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:23.346988 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [f865e5a4-4b21-4d15-a437-47965f0d1db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:23.347009 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-zgrj5" [d52789c5-dfe7-48e6-9dfd-a7dc5b5be6ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:23.347099 1484104 system_pods.go:61] "storage-provisioner" [96723fff-956b-42c4-864b-b18afb0c0285] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:27:23.347116 1484104 system_pods.go:74] duration metric: took 20.540773ms to wait for pod list to return data ...
	I1225 13:27:23.347135 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:23.358619 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:23.358673 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:23.358690 1484104 node_conditions.go:105] duration metric: took 11.539548ms to run NodePressure ...
	I1225 13:27:23.358716 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:23.795558 1484104 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804103 1484104 kubeadm.go:787] kubelet initialised
	I1225 13:27:23.804125 1484104 kubeadm.go:788] duration metric: took 8.535185ms waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804133 1484104 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:23.814199 1484104 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:20.557056 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:22.569215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.054111 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:21.909021 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:21.912423 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.912802 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:21.912828 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.913199 1482618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:27:21.917615 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:21.931709 1482618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 13:27:21.931830 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:21.991133 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:21.991246 1482618 ssh_runner.go:195] Run: which lz4
	I1225 13:27:21.997721 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:27:22.003171 1482618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:27:22.003218 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1225 13:27:23.975639 1482618 crio.go:444] Took 1.977982 seconds to copy over tarball
	I1225 13:27:23.975723 1482618 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:27:21.643721 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:24.132742 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.827617 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:28.322507 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.055526 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.558580 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.243294 1482618 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.267535049s)
	I1225 13:27:27.243339 1482618 crio.go:451] Took 3.267670 seconds to extract the tarball
	I1225 13:27:27.243368 1482618 ssh_runner.go:146] rm: /preloaded.tar.lz4
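The preload step above copies an lz4-compressed image tarball into the guest and unpacks it under /var before removing it. A local sketch of the extraction command, assuming the tarball already sits at /preloaded.tar.lz4 on the target filesystem (minikube scp's it there first):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// -I lz4 filters the archive through lz4 while extracting into /var.
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    }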
	I1225 13:27:27.285528 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:27.338914 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:27.338948 1482618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:27:27.339078 1482618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.339115 1482618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.339118 1482618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.339160 1482618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.339114 1482618 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.339054 1482618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.339059 1482618 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:27:27.339060 1482618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340631 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.340647 1482618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.340658 1482618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.340632 1482618 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.340666 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340635 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.502560 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.502567 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.510502 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1225 13:27:27.513052 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.518668 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.522882 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.553027 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.608178 1482618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1225 13:27:27.608235 1482618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.608294 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.655271 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.671173 1482618 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1225 13:27:27.671223 1482618 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1225 13:27:27.671283 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.671290 1482618 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1225 13:27:27.671330 1482618 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.671378 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728043 1482618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1225 13:27:27.728102 1482618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.728139 1482618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1225 13:27:27.728159 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728187 1482618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.728222 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739034 1482618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1225 13:27:27.739077 1482618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.739133 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739156 1482618 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1225 13:27:27.739205 1482618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.739213 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.739261 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.858062 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1225 13:27:27.858089 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.858143 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.858175 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.858237 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.858301 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1225 13:27:27.858358 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:28.004051 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:27:28.004125 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1225 13:27:28.004183 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.004226 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1225 13:27:28.004304 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1225 13:27:28.004369 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1225 13:27:28.005012 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1225 13:27:28.009472 1482618 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1225 13:27:28.009491 1482618 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.009550 1482618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1225 13:27:29.560553 1482618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550970125s)
	I1225 13:27:29.560586 1482618 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1225 13:27:29.560668 1482618 cache_images.go:92] LoadImages completed in 2.22170407s
	W1225 13:27:29.560766 1482618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
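The warning above comes from a cache miss: the kube-scheduler archive is absent on the build host, so only pause_3.1 could be transferred and loaded. A minimal Go sketch of that existence check (not the minikube code; the cache directory layout mirrors the log but is an assumption):

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    	"path/filepath"
    )

    // hasCachedImage reports whether an image archive exists in the local cache.
    func hasCachedImage(cacheDir, name string) (bool, error) {
    	_, err := os.Stat(filepath.Join(cacheDir, name))
    	if err == nil {
    		return true, nil
    	}
    	if errors.Is(err, fs.ErrNotExist) {
    		return false, nil
    	}
    	return false, err
    }

    func main() {
    	ok, err := hasCachedImage("/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io", "kube-scheduler_v1.16.0")
    	if err != nil {
    		fmt.Println("stat error:", err)
    		return
    	}
    	if ok {
    		fmt.Println("cache hit: load kube-scheduler_v1.16.0 from the archive")
    	} else {
    		// Mirrors the warning above: with no cached archive, the load is skipped
    		// and the image is pulled during kubeadm init instead.
    		fmt.Println("cache miss: kube-scheduler_v1.16.0 will be pulled instead")
    	}
    }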
	I1225 13:27:29.560846 1482618 ssh_runner.go:195] Run: crio config
	I1225 13:27:29.639267 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:29.639298 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:29.639324 1482618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:29.639375 1482618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-198979 NodeName:old-k8s-version-198979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 13:27:29.639598 1482618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-198979"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-198979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.186:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:29.639711 1482618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-198979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
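Unit files and YAML like the kubelet drop-in above are typically rendered by filling a Go text/template with the per-profile values; a minimal sketch under that assumption (the template body and field names are illustrative, not minikube's actual templates):

    package main

    import (
    	"os"
    	"text/template"
    )

    // unitTemplate is illustrative only; it is not minikube's real template.
    const unitTemplate = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n"

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTemplate))
    	// Values copied from the log above.
    	data := struct{ Version, NodeName, NodeIP string }{"v1.16.0", "old-k8s-version-198979", "192.168.39.186"}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }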
	I1225 13:27:29.639800 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1225 13:27:29.649536 1482618 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:29.649614 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:29.658251 1482618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1225 13:27:29.678532 1482618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:29.698314 1482618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1225 13:27:29.718873 1482618 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:29.723656 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:29.737736 1482618 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979 for IP: 192.168.39.186
	I1225 13:27:29.737787 1482618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:29.738006 1482618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:29.738069 1482618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:29.738147 1482618 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.key
	I1225 13:27:29.738211 1482618 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key.d0691019
	I1225 13:27:29.738252 1482618 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key
	I1225 13:27:29.738456 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:29.738501 1482618 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:29.738511 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:29.738543 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:29.738578 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:29.738617 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:29.738682 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:29.739444 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:29.765303 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:27:29.790702 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:29.818835 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 13:27:29.845659 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:29.872043 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:29.902732 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:29.928410 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:29.954350 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:29.978557 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:30.007243 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:30.036876 1482618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:30.055990 1482618 ssh_runner.go:195] Run: openssl version
	I1225 13:27:30.062813 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:30.075937 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082034 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082145 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.089645 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:30.102657 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:30.115701 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120635 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120711 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.128051 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:30.139465 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:30.151046 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156574 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156656 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.162736 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:30.174356 1482618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:30.180962 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:30.187746 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:30.194481 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:30.202279 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:30.210555 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:30.218734 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
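The openssl runs above are 24-hour expiry checks (-checkend 86400) on the existing cluster certificates. A minimal Go equivalent using crypto/x509 (a sketch, not the minikube code; the path is taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what "openssl x509 -checkend 86400" tests for a 24h window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }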
	I1225 13:27:30.225325 1482618 kubeadm.go:404] StartCluster: {Name:old-k8s-version-198979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:30.225424 1482618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:30.225478 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:30.274739 1482618 cri.go:89] found id: ""
	I1225 13:27:30.274842 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:30.285949 1482618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:30.285980 1482618 kubeadm.go:636] restartCluster start
	I1225 13:27:30.286051 1482618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:30.295521 1482618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:30.296804 1482618 kubeconfig.go:92] found "old-k8s-version-198979" server: "https://192.168.39.186:8443"
	I1225 13:27:30.299493 1482618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:30.308641 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.308745 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.320654 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:26.631365 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.129943 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.131590 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.329682 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.824743 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.824770 1484104 pod_ready.go:81] duration metric: took 8.010540801s waiting for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.824781 1484104 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830321 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.830347 1484104 pod_ready.go:81] duration metric: took 5.559816ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830358 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338865 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:32.338898 1484104 pod_ready.go:81] duration metric: took 508.532498ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338913 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846030 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.846054 1484104 pod_ready.go:81] duration metric: took 1.507133449s waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846065 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851826 1484104 pod_ready.go:92] pod "kube-proxy-wnjn2" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.851846 1484104 pod_ready.go:81] duration metric: took 5.775207ms waiting for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851855 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.054359 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:34.054586 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.809359 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.809482 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.821194 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.308690 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.308830 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.322775 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.809511 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.809612 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.823928 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.309450 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.309569 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.320937 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.809587 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.809686 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.822957 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.308905 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.308992 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.321195 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.808702 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.808803 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.820073 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.309661 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.309760 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.322931 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.809599 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.809724 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.825650 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:35.308697 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.308798 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.321313 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.630973 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.128884 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.859839 1484104 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.359809 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:36.359838 1484104 pod_ready.go:81] duration metric: took 2.507975576s waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:36.359853 1484104 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:38.371707 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.554699 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:39.053732 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.809083 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.809186 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.821434 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.309100 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.309181 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.322566 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.809026 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.809136 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.820791 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.309382 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.309501 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.321365 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.809397 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.809515 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.821538 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.309716 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.309819 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.321060 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.809627 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.809728 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.821784 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.309363 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.309483 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.320881 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.809420 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.809597 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.820752 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.308911 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:40.309009 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:40.322568 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.322614 1482618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:40.322653 1482618 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:40.322670 1482618 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:40.322730 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:40.366271 1482618 cri.go:89] found id: ""
	I1225 13:27:40.366365 1482618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:40.383123 1482618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:40.392329 1482618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:40.392412 1482618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401435 1482618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401471 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:38.131920 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.629516 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.868311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:42.872952 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:41.054026 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:43.054332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.538996 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.466467 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.697265 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.796796 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.898179 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:41.898290 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.398616 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.899373 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.399246 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.898788 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.923617 1482618 api_server.go:72] duration metric: took 2.025431683s to wait for apiserver process to appear ...
	I1225 13:27:43.923650 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:43.923684 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:42.632296 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.128501 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.368613 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.868011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.054778 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.559938 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:48.924695 1482618 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 13:27:48.924755 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.954284 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.954379 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:49.954401 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.985515 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.985568 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:50.424616 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.431560 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.431604 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:50.924173 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.935578 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.935622 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:51.424341 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:51.431709 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:27:51.440822 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:27:51.440855 1482618 api_server.go:131] duration metric: took 7.517198191s to wait for apiserver health ...
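The healthz wait above polls https://192.168.39.186:8443/healthz until it returns 200 "ok", tolerating the early 403/500 responses while RBAC bootstrap completes. A minimal Go sketch of such a poll loop (not the minikube code; the address is taken from the log, and certificate verification is skipped because the apiserver cert is self-signed):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.186:8443/healthz"
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("attempt %d: %d %s\n", i, resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver is healthy
    			}
    		} else {
    			fmt.Printf("attempt %d: %v\n", i, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }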
	I1225 13:27:51.440866 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:51.440873 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:51.442446 1482618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:47.130936 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:49.132275 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:51.443830 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:51.456628 1482618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:51.477822 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:51.487046 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:27:51.487082 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:27:51.487087 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:27:51.487091 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:27:51.487096 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Pending
	I1225 13:27:51.487100 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:27:51.487103 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:27:51.487107 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:27:51.487113 1482618 system_pods.go:74] duration metric: took 9.266811ms to wait for pod list to return data ...
	I1225 13:27:51.487120 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:51.491782 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:51.491817 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:51.491831 1482618 node_conditions.go:105] duration metric: took 4.70597ms to run NodePressure ...
	I1225 13:27:51.491855 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:51.768658 1482618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776258 1482618 kubeadm.go:787] kubelet initialised
	I1225 13:27:51.776283 1482618 kubeadm.go:788] duration metric: took 7.588357ms waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776293 1482618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:51.784053 1482618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.791273 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791314 1482618 pod_ready.go:81] duration metric: took 7.223677ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.791328 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791338 1482618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.801453 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801491 1482618 pod_ready.go:81] duration metric: took 10.138221ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.801505 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801514 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.809536 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809577 1482618 pod_ready.go:81] duration metric: took 8.051285ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.809590 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809608 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.882231 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882268 1482618 pod_ready.go:81] duration metric: took 72.643349ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.882299 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882309 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.282486 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282531 1482618 pod_ready.go:81] duration metric: took 400.208562ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.282543 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282552 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.689279 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689329 1482618 pod_ready.go:81] duration metric: took 406.764819ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.689343 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689353 1482618 pod_ready.go:38] duration metric: took 913.049281ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:52.689387 1482618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:52.705601 1482618 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:52.705628 1482618 kubeadm.go:640] restartCluster took 22.419638621s
	I1225 13:27:52.705639 1482618 kubeadm.go:406] StartCluster complete in 22.480335985s
	I1225 13:27:52.705663 1482618 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.705760 1482618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:52.708825 1482618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.709185 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:52.709313 1482618 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:52.709404 1482618 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709427 1482618 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-198979"
	W1225 13:27:52.709435 1482618 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:52.709443 1482618 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709460 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:52.709466 1482618 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709475 1482618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-198979"
	I1225 13:27:52.709482 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709488 1482618 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-198979"
	W1225 13:27:52.709502 1482618 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:52.709553 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709914 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709953 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709964 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709992 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709965 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.710046 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.729360 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1225 13:27:52.730016 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.730343 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1225 13:27:52.730527 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1225 13:27:52.730777 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.730808 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.730852 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731329 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.731365 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.731381 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.731589 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.731638 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731715 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.732311 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.732360 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.732731 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.732763 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.733225 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.733787 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.733859 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.735675 1482618 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-198979"
	W1225 13:27:52.735694 1482618 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:52.735725 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.736079 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.736117 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.751072 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I1225 13:27:52.752097 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.753002 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.753022 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.753502 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.753741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.756158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.758410 1482618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:52.758080 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I1225 13:27:52.759927 1482618 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.759942 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:52.759963 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.760521 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.761648 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.761665 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.762046 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.762823 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.762872 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.763974 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764712 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.764748 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764752 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.765009 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.765461 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.791493 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.792265 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.792294 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.792795 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.793023 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.795238 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.799536 1482618 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:52.800892 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:52.800920 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:52.800955 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.804762 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806571 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.806568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.806606 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806957 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.807115 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.807260 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.811419 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1225 13:27:52.811816 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.812352 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.812379 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.812872 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.813083 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.814823 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.815122 1482618 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:52.815138 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:52.815158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.818411 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.818892 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.818926 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.819253 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.819504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.819705 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.819981 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.963144 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.974697 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:52.974733 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:53.021391 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:53.039959 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:53.039991 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:53.121390 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.121421 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:53.196232 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.256419 1482618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-198979" context rescaled to 1 replicas
	I1225 13:27:53.256479 1482618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:53.258366 1482618 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:53.259807 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:53.276151 1482618 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:53.687341 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687374 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.687666 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.687690 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.687701 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687710 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.689261 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.689286 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.689294 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.725954 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.725985 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.726715 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.726737 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.726743 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.726776 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.726787 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.727040 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.727054 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.727061 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.744318 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.744356 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.744696 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.744745 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.846817 1482618 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:27:53.846878 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.846899 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847234 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847301 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847317 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847329 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.847351 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847728 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847767 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847793 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847810 1482618 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-198979"
	I1225 13:27:53.850107 1482618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:49.870506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.369916 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:50.056130 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.562555 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:53.851456 1482618 addons.go:508] enable addons completed in 1.14214354s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:51.635205 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.131852 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.868902 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.367267 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.368997 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.057522 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.555214 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.851206 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:58.350906 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:28:00.350892 1482618 node_ready.go:49] node "old-k8s-version-198979" has status "Ready":"True"
	I1225 13:28:00.350918 1482618 node_ready.go:38] duration metric: took 6.504066205s waiting for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:28:00.350928 1482618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:00.355882 1482618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362249 1482618 pod_ready.go:92] pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.362281 1482618 pod_ready.go:81] duration metric: took 6.362168ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362290 1482618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367738 1482618 pod_ready.go:92] pod "etcd-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.367777 1482618 pod_ready.go:81] duration metric: took 5.478984ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367790 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373724 1482618 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.373754 1482618 pod_ready.go:81] duration metric: took 5.95479ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373774 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380810 1482618 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.380841 1482618 pod_ready.go:81] duration metric: took 7.058206ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380854 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:56.635216 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.129464 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:01.132131 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.750612 1482618 pod_ready.go:92] pod "kube-proxy-vw9lf" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.750641 1482618 pod_ready.go:81] duration metric: took 369.779347ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.750651 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151567 1482618 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:01.151596 1482618 pod_ready.go:81] duration metric: took 400.937167ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151617 1482618 pod_ready.go:38] duration metric: took 800.677743ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:01.151634 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:28:01.151694 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:28:01.170319 1482618 api_server.go:72] duration metric: took 7.913795186s to wait for apiserver process to appear ...
	I1225 13:28:01.170349 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:28:01.170368 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:28:01.177133 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:28:01.178326 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:28:01.178351 1482618 api_server.go:131] duration metric: took 7.994163ms to wait for apiserver health ...
	I1225 13:28:01.178361 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:28:01.352663 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:28:01.352693 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.352697 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.352702 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.352706 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.352710 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.352714 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.352718 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.352724 1482618 system_pods.go:74] duration metric: took 174.35745ms to wait for pod list to return data ...
	I1225 13:28:01.352731 1482618 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:28:01.554095 1482618 default_sa.go:45] found service account: "default"
	I1225 13:28:01.554129 1482618 default_sa.go:55] duration metric: took 201.391529ms for default service account to be created ...
	I1225 13:28:01.554139 1482618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:28:01.757666 1482618 system_pods.go:86] 7 kube-system pods found
	I1225 13:28:01.757712 1482618 system_pods.go:89] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.757724 1482618 system_pods.go:89] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.757731 1482618 system_pods.go:89] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.757747 1482618 system_pods.go:89] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.757754 1482618 system_pods.go:89] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.757763 1482618 system_pods.go:89] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.757769 1482618 system_pods.go:89] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.757785 1482618 system_pods.go:126] duration metric: took 203.63938ms to wait for k8s-apps to be running ...
	I1225 13:28:01.757800 1482618 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:28:01.757863 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:28:01.771792 1482618 system_svc.go:56] duration metric: took 13.980705ms WaitForService to wait for kubelet.
	I1225 13:28:01.771821 1482618 kubeadm.go:581] duration metric: took 8.515309843s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:28:01.771843 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:28:01.952426 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:28:01.952463 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:28:01.952477 1482618 node_conditions.go:105] duration metric: took 180.629128ms to run NodePressure ...
	I1225 13:28:01.952493 1482618 start.go:228] waiting for startup goroutines ...
	I1225 13:28:01.952500 1482618 start.go:233] waiting for cluster config update ...
	I1225 13:28:01.952512 1482618 start.go:242] writing updated cluster config ...
	I1225 13:28:01.952974 1482618 ssh_runner.go:195] Run: rm -f paused
	I1225 13:28:02.007549 1482618 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I1225 13:28:02.009559 1482618 out.go:177] 
	W1225 13:28:02.011242 1482618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I1225 13:28:02.012738 1482618 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1225 13:28:02.014029 1482618 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-198979" cluster and "default" namespace by default
	I1225 13:28:01.869370 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.368824 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.055713 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:02.553981 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.554824 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:03.629358 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.130616 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.869993 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.367869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:07.054835 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.554904 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:08.130786 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:10.632435 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:11.368789 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.867665 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:12.054007 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:14.554676 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.129854 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.628997 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.869048 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:18.368070 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:16.557633 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:19.054486 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:17.629072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.129902 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.868173 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.868637 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:21.555027 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.054858 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.133148 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.630133 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:25.369437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.870029 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:26.056198 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:28.555876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.129583 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:29.629963 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:30.367773 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.368497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.369791 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:31.053212 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:33.054315 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.128310 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.130650 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.869325 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.367488 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:35.056761 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:37.554917 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.632857 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.129518 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.368425 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:43.868157 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:40.054854 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:42.555015 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:45.053900 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.630558 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:44.132072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.366422 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:48.368331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:47.056378 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.555186 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.629415 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.129249 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:51.129692 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:50.868321 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.366805 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:52.053785 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:54.057533 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.629427 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.629652 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.368197 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.867659 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.868187 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:56.556558 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.055474 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.629912 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.630858 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.868360 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:03.870936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.555132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.053887 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:02.127901 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.131186 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.367634 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.867571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.054546 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.554559 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.629995 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:09.129898 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:10.868677 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:12.868979 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.055554 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:13.554637 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.629511 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.129806 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.872549 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:17.371705 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:19.868438 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.054016 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.055476 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.629688 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.630125 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:21.132102 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.367525 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:24.369464 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:20.554660 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.556044 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:25.054213 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:23.630061 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.132281 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.868977 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.367384 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:27.055844 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.554124 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:28.630474 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:30.631070 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.367691 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.867941 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.555167 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.557066 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:32.634599 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:35.131402 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.369081 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.868497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.054764 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.054975 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:37.629895 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:39.630456 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:41.366745 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:43.367883 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:40.554998 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.555257 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.130638 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:44.629851 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.371692 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.866965 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.868100 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.057506 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.555247 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:46.632874 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.129782 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.130176 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.868818 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.868968 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:50.055939 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:52.556609 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.054048 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.132556 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.632608 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:56.368065 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.868076 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:57.054224 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:59.554940 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.128545 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.129437 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.868364 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:03.368093 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.054215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.056019 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.129706 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.130092 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:05.867992 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:07.872121 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.554889 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:09.056197 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.630974 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:08.632171 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.128952 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:10.367536 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:12.369331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.554738 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.555681 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.129878 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:15.130470 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:14.868630 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.367768 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.368295 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:16.054391 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:18.054606 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.630479 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.630971 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:21.873194 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.368931 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:20.054866 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.554974 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:25.053696 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.130831 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.630755 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:26.867555 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:28.868612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.054706 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.055614 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.133840 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.630572 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:30.868716 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.369710 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:31.554882 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.556367 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:32.129865 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:34.129987 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.870671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:38.367237 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.557755 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:37.559481 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:36.630513 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:39.130271 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.368072 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.869043 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.055427 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.554787 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.053876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:41.629178 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:43.630237 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.631199 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:44.873439 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.367548 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.368066 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.555106 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.556132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:48.130206 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:50.629041 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:51.369311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:53.870853 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.055511 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:54.061135 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.630215 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.130153 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.873755 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:58.367682 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:56.554861 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.054344 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:57.629571 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.630560 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:00.372506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:02.867084 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:01.554332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:03.554717 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.555955 1483118 pod_ready.go:81] duration metric: took 4m0.009196678s waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:04.555987 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:04.555994 1483118 pod_ready.go:38] duration metric: took 4m2.890580557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:04.556014 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:04.556050 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:04.556152 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:04.615717 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:04.615748 1483118 cri.go:89] found id: ""
	I1225 13:31:04.615759 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:04.615830 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.621669 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:04.621778 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:04.661088 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:04.661127 1483118 cri.go:89] found id: ""
	I1225 13:31:04.661139 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:04.661191 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.666410 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:04.666496 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:04.710927 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:04.710962 1483118 cri.go:89] found id: ""
	I1225 13:31:04.710973 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:04.711041 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.715505 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:04.715587 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:04.761494 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:04.761518 1483118 cri.go:89] found id: ""
	I1225 13:31:04.761527 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:04.761580 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.766925 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:04.767015 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:04.810640 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:04.810670 1483118 cri.go:89] found id: ""
	I1225 13:31:04.810685 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:04.810753 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.815190 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:04.815285 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:04.858275 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:04.858301 1483118 cri.go:89] found id: ""
	I1225 13:31:04.858309 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:04.858362 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.863435 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:04.863529 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:04.914544 1483118 cri.go:89] found id: ""
	I1225 13:31:04.914583 1483118 logs.go:284] 0 containers: []
	W1225 13:31:04.914594 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:04.914603 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:04.914675 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:04.969548 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:04.969577 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:04.969584 1483118 cri.go:89] found id: ""
	I1225 13:31:04.969594 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:04.969660 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.974172 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.978956 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:04.978989 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:05.033590 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:05.033632 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:02.133447 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.630226 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.869025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:07.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.369061 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:05.085851 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:05.085879 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:05.144002 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:05.144047 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:05.191669 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:05.191703 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:05.238581 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:05.238617 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:05.253236 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:05.253271 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:05.293626 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:05.293674 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:05.338584 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:05.338622 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:05.381135 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:05.381172 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:05.886860 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:05.886918 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:06.045040 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:06.045080 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.101152 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:06.101192 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.662518 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:08.678649 1483118 api_server.go:72] duration metric: took 4m14.820531999s to wait for apiserver process to appear ...
	I1225 13:31:08.678687 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:08.678729 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:08.678791 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:08.718202 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:08.718246 1483118 cri.go:89] found id: ""
	I1225 13:31:08.718255 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:08.718305 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.723089 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:08.723177 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:08.772619 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:08.772641 1483118 cri.go:89] found id: ""
	I1225 13:31:08.772649 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:08.772709 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.777577 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:08.777669 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:08.818869 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:08.818900 1483118 cri.go:89] found id: ""
	I1225 13:31:08.818910 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:08.818970 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.823301 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:08.823382 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:08.868885 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:08.868913 1483118 cri.go:89] found id: ""
	I1225 13:31:08.868924 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:08.868982 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.873489 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:08.873562 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:08.916925 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:08.916957 1483118 cri.go:89] found id: ""
	I1225 13:31:08.916967 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:08.917065 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.921808 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:08.921901 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:08.961586 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.961617 1483118 cri.go:89] found id: ""
	I1225 13:31:08.961628 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:08.961707 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.965986 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:08.966096 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:09.012223 1483118 cri.go:89] found id: ""
	I1225 13:31:09.012262 1483118 logs.go:284] 0 containers: []
	W1225 13:31:09.012270 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:09.012278 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:09.012343 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:09.060646 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:09.060675 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:09.060683 1483118 cri.go:89] found id: ""
	I1225 13:31:09.060694 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:09.060767 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.065955 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.070859 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:09.070890 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:09.128056 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:09.128096 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:09.179304 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:09.179341 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:09.194019 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:09.194048 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:09.339697 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:09.339743 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:09.389626 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:09.389669 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:09.831437 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:09.831498 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:09.888799 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:09.888848 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:09.932201 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:09.932232 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:09.983201 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:09.983242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:10.039094 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:10.039149 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.630567 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.130605 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:11.369445 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.870404 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:10.095628 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:10.095677 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:10.139678 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:10.139717 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:12.688297 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:31:12.693469 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:31:12.694766 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:31:12.694788 1483118 api_server.go:131] duration metric: took 4.016094906s to wait for apiserver health ...
	I1225 13:31:12.694796 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:12.694821 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:12.694876 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:12.743143 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:12.743174 1483118 cri.go:89] found id: ""
	I1225 13:31:12.743185 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:12.743238 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.747708 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:12.747803 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:12.800511 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:12.800540 1483118 cri.go:89] found id: ""
	I1225 13:31:12.800549 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:12.800612 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.805236 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:12.805308 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:12.850047 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:12.850081 1483118 cri.go:89] found id: ""
	I1225 13:31:12.850092 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:12.850152 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.854516 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:12.854602 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:12.902131 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:12.902162 1483118 cri.go:89] found id: ""
	I1225 13:31:12.902173 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:12.902239 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.907546 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:12.907634 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:12.966561 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:12.966590 1483118 cri.go:89] found id: ""
	I1225 13:31:12.966601 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:12.966674 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.971071 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:12.971161 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:13.026823 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.026851 1483118 cri.go:89] found id: ""
	I1225 13:31:13.026862 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:13.026927 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.031499 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:13.031576 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:13.077486 1483118 cri.go:89] found id: ""
	I1225 13:31:13.077512 1483118 logs.go:284] 0 containers: []
	W1225 13:31:13.077520 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:13.077526 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:13.077589 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:13.130262 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.130287 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.130294 1483118 cri.go:89] found id: ""
	I1225 13:31:13.130305 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:13.130364 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.138345 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.142749 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:13.142780 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:13.264652 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:13.264694 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:13.315138 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:13.315182 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:13.375532 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:13.375570 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.418188 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:13.418226 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:13.433392 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:13.433423 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:13.472447 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:13.472481 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.514578 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:13.514631 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:13.568962 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:13.569001 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:13.609819 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:13.609864 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.668114 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:13.668160 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:13.710116 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:13.710155 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:14.068484 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:14.068548 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:11.629829 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.632277 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:15.629964 1483946 pod_ready.go:81] duration metric: took 4m0.008391697s waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:15.629997 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:15.630006 1483946 pod_ready.go:38] duration metric: took 4m4.430454443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:15.630022 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:15.630052 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:15.630113 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:15.694629 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:15.694654 1483946 cri.go:89] found id: ""
	I1225 13:31:15.694666 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:15.694735 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.699777 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:15.699847 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:15.744267 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:15.744299 1483946 cri.go:89] found id: ""
	I1225 13:31:15.744308 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:15.744361 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.749213 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:15.749310 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:15.796903 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:15.796930 1483946 cri.go:89] found id: ""
	I1225 13:31:15.796939 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:15.797001 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.801601 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:15.801673 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:15.841792 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:15.841820 1483946 cri.go:89] found id: ""
	I1225 13:31:15.841830 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:15.841902 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.845893 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:15.845970 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:15.901462 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:15.901493 1483946 cri.go:89] found id: ""
	I1225 13:31:15.901505 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:15.901589 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.907173 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:15.907264 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:15.957143 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:15.957177 1483946 cri.go:89] found id: ""
	I1225 13:31:15.957186 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:15.957239 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.962715 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:15.962789 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:16.007949 1483946 cri.go:89] found id: ""
	I1225 13:31:16.007988 1483946 logs.go:284] 0 containers: []
	W1225 13:31:16.007999 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:16.008008 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:16.008076 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:16.063958 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:16.063984 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:16.063989 1483946 cri.go:89] found id: ""
	I1225 13:31:16.063997 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:16.064052 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.069193 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.074310 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:16.074333 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:16.120318 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:16.120363 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:16.176217 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:16.176264 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:16.633470 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:16.633507 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.633512 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.633516 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.633521 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.633525 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.633529 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.633536 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.633541 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.633548 1483118 system_pods.go:74] duration metric: took 3.938745899s to wait for pod list to return data ...
	I1225 13:31:16.633556 1483118 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:16.637279 1483118 default_sa.go:45] found service account: "default"
	I1225 13:31:16.637314 1483118 default_sa.go:55] duration metric: took 3.749637ms for default service account to be created ...
	I1225 13:31:16.637325 1483118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:16.644466 1483118 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:16.644501 1483118 system_pods.go:89] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.644509 1483118 system_pods.go:89] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.644516 1483118 system_pods.go:89] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.644523 1483118 system_pods.go:89] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.644530 1483118 system_pods.go:89] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.644536 1483118 system_pods.go:89] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.644547 1483118 system_pods.go:89] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.644558 1483118 system_pods.go:89] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.644583 1483118 system_pods.go:126] duration metric: took 7.250639ms to wait for k8s-apps to be running ...
	I1225 13:31:16.644594 1483118 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:16.644658 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:16.661680 1483118 system_svc.go:56] duration metric: took 17.070893ms WaitForService to wait for kubelet.
	I1225 13:31:16.661723 1483118 kubeadm.go:581] duration metric: took 4m22.80360778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:16.661754 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:16.666189 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:16.666227 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:16.666294 1483118 node_conditions.go:105] duration metric: took 4.531137ms to run NodePressure ...
	I1225 13:31:16.666313 1483118 start.go:228] waiting for startup goroutines ...
	I1225 13:31:16.666323 1483118 start.go:233] waiting for cluster config update ...
	I1225 13:31:16.666338 1483118 start.go:242] writing updated cluster config ...
	I1225 13:31:16.666702 1483118 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:16.729077 1483118 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:31:16.732824 1483118 out.go:177] * Done! kubectl is now configured to use "no-preload-330063" cluster and "default" namespace by default
	I1225 13:31:16.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:18.374788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:16.686611 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:16.686650 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:16.748667 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:16.748705 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:16.937661 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:16.937700 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:16.988870 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:16.988908 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:17.048278 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:17.048316 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:17.095857 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:17.095900 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:17.135425 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:17.135460 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:17.197626 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:17.197670 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:17.213658 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:17.213695 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:17.282101 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:17.282149 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:19.824939 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:19.840944 1483946 api_server.go:72] duration metric: took 4m11.866743679s to wait for apiserver process to appear ...
	I1225 13:31:19.840985 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:19.841036 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:19.841114 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:19.895404 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:19.895445 1483946 cri.go:89] found id: ""
	I1225 13:31:19.895455 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:19.895519 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.900604 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:19.900686 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:19.943623 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:19.943652 1483946 cri.go:89] found id: ""
	I1225 13:31:19.943662 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:19.943728 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.948230 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:19.948298 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:19.993271 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:19.993296 1483946 cri.go:89] found id: ""
	I1225 13:31:19.993304 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:19.993355 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.997702 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:19.997790 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:20.043487 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.043514 1483946 cri.go:89] found id: ""
	I1225 13:31:20.043525 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:20.043591 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.047665 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:20.047748 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:20.091832 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.091867 1483946 cri.go:89] found id: ""
	I1225 13:31:20.091878 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:20.091947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.096400 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:20.096463 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:20.136753 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.136785 1483946 cri.go:89] found id: ""
	I1225 13:31:20.136794 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:20.136867 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.141479 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:20.141559 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:20.184635 1483946 cri.go:89] found id: ""
	I1225 13:31:20.184677 1483946 logs.go:284] 0 containers: []
	W1225 13:31:20.184688 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:20.184694 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:20.184770 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:20.231891 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.231918 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.231923 1483946 cri.go:89] found id: ""
	I1225 13:31:20.231932 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:20.231991 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.236669 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.240776 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:20.240804 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:20.305411 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:20.305479 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:20.376688 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:20.376729 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:20.419016 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:20.419060 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.465253 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:20.465288 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.505949 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:20.505994 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.565939 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:20.565995 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.608765 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:20.608798 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.646031 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:20.646076 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:20.694772 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:20.694812 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:20.710038 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:20.710074 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:20.841944 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:20.841996 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:21.267824 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:21.267884 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:20.869158 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:22.870463 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:23.834749 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:31:23.840763 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:31:23.842396 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:31:23.842424 1483946 api_server.go:131] duration metric: took 4.001431078s to wait for apiserver health ...
	I1225 13:31:23.842451 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:23.842481 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:23.842535 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:23.901377 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:23.901409 1483946 cri.go:89] found id: ""
	I1225 13:31:23.901420 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:23.901489 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.906312 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:23.906382 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:23.957073 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:23.957105 1483946 cri.go:89] found id: ""
	I1225 13:31:23.957115 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:23.957175 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.961899 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:23.961968 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:24.009529 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:24.009575 1483946 cri.go:89] found id: ""
	I1225 13:31:24.009587 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:24.009656 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.014579 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:24.014668 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:24.059589 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:24.059618 1483946 cri.go:89] found id: ""
	I1225 13:31:24.059629 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:24.059698 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.065185 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:24.065265 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:24.123904 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.123932 1483946 cri.go:89] found id: ""
	I1225 13:31:24.123942 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:24.124006 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.128753 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:24.128849 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:24.172259 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:24.172285 1483946 cri.go:89] found id: ""
	I1225 13:31:24.172296 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:24.172363 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.177276 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:24.177356 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:24.223415 1483946 cri.go:89] found id: ""
	I1225 13:31:24.223445 1483946 logs.go:284] 0 containers: []
	W1225 13:31:24.223453 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:24.223459 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:24.223516 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:24.267840 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:24.267866 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:24.267870 1483946 cri.go:89] found id: ""
	I1225 13:31:24.267878 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:24.267939 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.272947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.279183 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:24.279213 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:24.343548 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:24.343592 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:24.398275 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:24.398312 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.443435 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:24.443472 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:24.814711 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:24.814770 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:24.828613 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:24.828649 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:24.979501 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:24.979538 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:25.028976 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:25.029011 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:25.083148 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:25.083191 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:25.155284 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:25.155336 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:25.213437 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:25.213483 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:25.260934 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:25.260973 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:25.307395 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:25.307430 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:27.884673 1483946 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:27.884702 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.884708 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.884713 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.884717 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.884721 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.884725 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.884731 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.884737 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.884744 1483946 system_pods.go:74] duration metric: took 4.04228589s to wait for pod list to return data ...
	I1225 13:31:27.884752 1483946 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:27.889125 1483946 default_sa.go:45] found service account: "default"
	I1225 13:31:27.889156 1483946 default_sa.go:55] duration metric: took 4.397454ms for default service account to be created ...
	I1225 13:31:27.889167 1483946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:27.896851 1483946 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:27.896879 1483946 system_pods.go:89] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.896884 1483946 system_pods.go:89] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.896889 1483946 system_pods.go:89] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.896894 1483946 system_pods.go:89] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.896898 1483946 system_pods.go:89] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.896901 1483946 system_pods.go:89] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.896908 1483946 system_pods.go:89] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.896912 1483946 system_pods.go:89] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.896920 1483946 system_pods.go:126] duration metric: took 7.747348ms to wait for k8s-apps to be running ...
	I1225 13:31:27.896929 1483946 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:27.896981 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:27.917505 1483946 system_svc.go:56] duration metric: took 20.559839ms WaitForService to wait for kubelet.
	I1225 13:31:27.917542 1483946 kubeadm.go:581] duration metric: took 4m19.94335169s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:27.917568 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:27.921689 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:27.921715 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:27.921797 1483946 node_conditions.go:105] duration metric: took 4.219723ms to run NodePressure ...
	I1225 13:31:27.921814 1483946 start.go:228] waiting for startup goroutines ...
	I1225 13:31:27.921825 1483946 start.go:233] waiting for cluster config update ...
	I1225 13:31:27.921838 1483946 start.go:242] writing updated cluster config ...
	I1225 13:31:27.922130 1483946 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:27.976011 1483946 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:31:27.978077 1483946 out.go:177] * Done! kubectl is now configured to use "embed-certs-880612" cluster and "default" namespace by default
	I1225 13:31:24.870628 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:26.873379 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:29.367512 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:31.367730 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:33.867551 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:36.360292 1484104 pod_ready.go:81] duration metric: took 4m0.000407846s waiting for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:36.360349 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" (will not retry!)
	I1225 13:31:36.360378 1484104 pod_ready.go:38] duration metric: took 4m12.556234617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:36.360445 1484104 kubeadm.go:640] restartCluster took 4m32.941510355s
	W1225 13:31:36.360540 1484104 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1225 13:31:36.360578 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1225 13:31:50.552320 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.191703988s)
	I1225 13:31:50.552417 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:50.569621 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:31:50.581050 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:31:50.591777 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:31:50.591837 1484104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:31:50.651874 1484104 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 13:31:50.651952 1484104 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:31:50.822009 1484104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:31:50.822174 1484104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:31:50.822258 1484104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:31:51.074237 1484104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:31:51.077463 1484104 out.go:204]   - Generating certificates and keys ...
	I1225 13:31:51.077575 1484104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:31:51.077637 1484104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:31:51.077703 1484104 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 13:31:51.077755 1484104 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1225 13:31:51.077816 1484104 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1225 13:31:51.077908 1484104 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1225 13:31:51.078059 1484104 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1225 13:31:51.078715 1484104 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1225 13:31:51.079408 1484104 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 13:31:51.080169 1484104 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 13:31:51.080635 1484104 kubeadm.go:322] [certs] Using the existing "sa" key
	I1225 13:31:51.080724 1484104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:31:51.147373 1484104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:31:51.298473 1484104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:31:51.403869 1484104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:31:51.719828 1484104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:31:51.720523 1484104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:31:51.725276 1484104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:31:51.727100 1484104 out.go:204]   - Booting up control plane ...
	I1225 13:31:51.727248 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:31:51.727343 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:31:51.727431 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:31:51.745500 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:31:51.746331 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:31:51.746392 1484104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:31:51.897052 1484104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:32:00.401261 1484104 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504339 seconds
	I1225 13:32:00.401463 1484104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:32:00.422010 1484104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:32:00.962174 1484104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:32:00.962418 1484104 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-344803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:32:01.479956 1484104 kubeadm.go:322] [bootstrap-token] Using token: 7n7qlp.3wejtqrgqunjtf8y
	I1225 13:32:01.481699 1484104 out.go:204]   - Configuring RBAC rules ...
	I1225 13:32:01.481862 1484104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:32:01.489709 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:32:01.499287 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:32:01.504520 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:32:01.508950 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:32:01.517277 1484104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:32:01.537420 1484104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:32:01.820439 1484104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:32:01.897010 1484104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:32:01.897039 1484104 kubeadm.go:322] 
	I1225 13:32:01.897139 1484104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:32:01.897169 1484104 kubeadm.go:322] 
	I1225 13:32:01.897259 1484104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:32:01.897270 1484104 kubeadm.go:322] 
	I1225 13:32:01.897292 1484104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:32:01.897383 1484104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:32:01.897471 1484104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:32:01.897484 1484104 kubeadm.go:322] 
	I1225 13:32:01.897558 1484104 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:32:01.897568 1484104 kubeadm.go:322] 
	I1225 13:32:01.897621 1484104 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:32:01.897629 1484104 kubeadm.go:322] 
	I1225 13:32:01.897702 1484104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:32:01.897822 1484104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:32:01.897923 1484104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:32:01.897935 1484104 kubeadm.go:322] 
	I1225 13:32:01.898040 1484104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:32:01.898141 1484104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:32:01.898156 1484104 kubeadm.go:322] 
	I1225 13:32:01.898264 1484104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898455 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:32:01.898506 1484104 kubeadm.go:322] 	--control-plane 
	I1225 13:32:01.898516 1484104 kubeadm.go:322] 
	I1225 13:32:01.898627 1484104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:32:01.898645 1484104 kubeadm.go:322] 
	I1225 13:32:01.898760 1484104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898898 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:32:01.899552 1484104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:32:01.899699 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:32:01.899720 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:32:01.902817 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:32:01.904375 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:32:01.943752 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:32:02.004751 1484104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:32:02.004915 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=default-k8s-diff-port-344803 minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.004920 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.377800 1484104 ops.go:34] apiserver oom_adj: -16
	I1225 13:32:02.378388 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.879083 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.379453 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.878676 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.378589 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.878630 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.378615 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.879009 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.379100 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.878610 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.378604 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.878597 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.379427 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.878637 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.378638 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.879200 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.378659 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.879285 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.378603 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.878605 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.379451 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.879431 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.379034 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.878468 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.378592 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.878569 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:15.008581 1484104 kubeadm.go:1088] duration metric: took 13.00372954s to wait for elevateKubeSystemPrivileges.
	I1225 13:32:15.008626 1484104 kubeadm.go:406] StartCluster complete in 5m11.652335467s
	I1225 13:32:15.008653 1484104 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.008763 1484104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:32:15.011655 1484104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.011982 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:32:15.012172 1484104 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:32:15.012258 1484104 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012285 1484104 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012297 1484104 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:32:15.012311 1484104 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012347 1484104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-344803"
	I1225 13:32:15.012363 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012798 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012800 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012831 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012833 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012898 1484104 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012912 1484104 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012919 1484104 addons.go:246] addon metrics-server should already be in state true
	I1225 13:32:15.012961 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012972 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:32:15.013289 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.013318 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.032424 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1225 13:32:15.032981 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1225 13:32:15.033180 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1225 13:32:15.033455 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033575 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033623 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.034052 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034069 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034173 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034195 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034209 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034238 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034412 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034635 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034693 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034728 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.036190 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036205 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036228 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.036229 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.040383 1484104 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.040442 1484104 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:32:15.040473 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.040780 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.040820 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.055366 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1225 13:32:15.055979 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.056596 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.056623 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1225 13:32:15.057067 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057205 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057218 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.057413 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.057741 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.057768 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.057958 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.058013 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.058122 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058413 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058776 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.058816 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.059142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.059588 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.061854 1484104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:32:15.060849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.063569 1484104 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.063593 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:32:15.065174 1484104 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:32:15.063622 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.066654 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:32:15.066677 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:32:15.066700 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.071209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071995 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072039 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072074 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072319 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072558 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072875 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.072941 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.073085 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.073138 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.077927 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1225 13:32:15.078428 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.079241 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.079262 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.079775 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.079983 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.081656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.082002 1484104 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.082024 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:32:15.082047 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.085367 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.085779 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.085805 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.086119 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.086390 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.086656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.086875 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.262443 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:32:15.262470 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:32:15.270730 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:32:15.285178 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.302070 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:32:15.302097 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:32:15.303686 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.373021 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.373054 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:32:15.461862 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.518928 1484104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-344803" context rescaled to 1 replicas
	I1225 13:32:15.518973 1484104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:32:15.520858 1484104 out.go:177] * Verifying Kubernetes components...
	I1225 13:32:15.522326 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:32:16.993620 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.72284687s)
	I1225 13:32:16.993667 1484104 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1225 13:32:17.329206 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.025471574s)
	I1225 13:32:17.329305 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329321 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329352 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044135646s)
	I1225 13:32:17.329411 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329430 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329697 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329722 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329737 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329747 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.329764 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329740 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329805 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329825 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329838 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.331647 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331675 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331706 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331715 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.331734 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331766 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.350031 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.350068 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.350458 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.350499 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.350516 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.582723 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.120815372s)
	I1225 13:32:17.582785 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.582798 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.582787 1484104 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.060422325s)
	I1225 13:32:17.582838 1484104 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.583145 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583172 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.583179 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583192 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.583201 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.583438 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583461 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583471 1484104 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-344803"
	I1225 13:32:17.585288 1484104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:32:17.586537 1484104 addons.go:508] enable addons completed in 2.574365441s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:32:17.595130 1484104 node_ready.go:49] node "default-k8s-diff-port-344803" has status "Ready":"True"
	I1225 13:32:17.595165 1484104 node_ready.go:38] duration metric: took 12.307997ms waiting for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.595181 1484104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:32:17.613099 1484104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:19.621252 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:20.621494 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.621519 1484104 pod_ready.go:81] duration metric: took 3.008379569s waiting for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.621528 1484104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630348 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.630375 1484104 pod_ready.go:81] duration metric: took 8.841316ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630387 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636928 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.636953 1484104 pod_ready.go:81] duration metric: took 6.558203ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636963 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643335 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.643360 1484104 pod_ready.go:81] duration metric: took 6.390339ms waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643369 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649496 1484104 pod_ready.go:92] pod "kube-proxy-fpk9s" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.649526 1484104 pod_ready.go:81] duration metric: took 6.150243ms waiting for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649535 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018065 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:21.018092 1484104 pod_ready.go:81] duration metric: took 368.549291ms waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018102 1484104 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:23.026953 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:25.525822 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:27.530780 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:30.033601 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:32.528694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:34.529208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:37.028717 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:39.526632 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:42.026868 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:44.028002 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:46.526534 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:48.529899 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:51.026062 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:53.525655 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:55.526096 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:58.026355 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:00.026674 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:02.029299 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:04.526609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:06.526810 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:09.026498 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:11.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:13.029416 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:15.526242 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:18.026664 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:20.529125 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:23.026694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:25.029350 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:27.527537 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:30.030562 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:32.526381 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:34.526801 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:37.027939 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:39.526249 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:41.526511 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:43.526783 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:45.527693 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:48.026703 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:50.027582 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:52.526290 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:55.027458 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:57.526559 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:59.526699 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:01.527938 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:03.529353 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:06.025942 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:08.027340 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:10.028087 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:12.525688 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:14.527122 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:16.529380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:19.026128 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:21.026183 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:23.027208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:25.526282 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:27.531847 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:30.030025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:32.526291 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:34.526470 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:36.527179 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:39.026270 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:41.029609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:43.528905 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:46.026666 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:48.528560 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:51.025864 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:53.027211 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:55.527359 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:58.025696 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:00.027368 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:02.027605 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:04.525836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:06.526571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:08.528550 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:11.026765 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:13.028215 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:15.525903 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:17.527102 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:20.026011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:22.525873 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:24.528380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:27.026402 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:29.527869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:32.026671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:34.026737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:36.026836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:38.526788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:41.027387 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:43.526936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:46.026316 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:48.026940 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:50.526565 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:53.025988 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:55.027146 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:57.527287 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:00.028971 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:02.526704 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:05.025995 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:07.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:09.027839 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:11.526845 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:13.527737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:16.026967 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:18.028747 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:20.527437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:21.027372 1484104 pod_ready.go:81] duration metric: took 4m0.009244403s waiting for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	E1225 13:36:21.027405 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:36:21.027418 1484104 pod_ready.go:38] duration metric: took 4m3.432224558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:36:21.027474 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:36:21.027560 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:21.027806 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:21.090421 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:21.090464 1484104 cri.go:89] found id: ""
	I1225 13:36:21.090474 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:21.090526 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.095523 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:21.095605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:21.139092 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:21.139126 1484104 cri.go:89] found id: ""
	I1225 13:36:21.139136 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:21.139206 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.143957 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:21.144038 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:21.190905 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:21.190937 1484104 cri.go:89] found id: ""
	I1225 13:36:21.190948 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:21.191018 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.195814 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:21.195882 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:21.240274 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:21.240307 1484104 cri.go:89] found id: ""
	I1225 13:36:21.240317 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:21.240384 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.244831 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:21.244930 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:21.289367 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:21.289399 1484104 cri.go:89] found id: ""
	I1225 13:36:21.289410 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:21.289478 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.293796 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:21.293878 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:21.338757 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:21.338789 1484104 cri.go:89] found id: ""
	I1225 13:36:21.338808 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:21.338878 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.343145 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:21.343217 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:21.384898 1484104 cri.go:89] found id: ""
	I1225 13:36:21.384929 1484104 logs.go:284] 0 containers: []
	W1225 13:36:21.384936 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:21.384943 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:21.385006 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:21.436776 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:21.436809 1484104 cri.go:89] found id: ""
	I1225 13:36:21.436818 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:21.436871 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.442173 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:21.442210 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:21.886890 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:21.886944 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:21.971380 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:21.971568 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:21.992672 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:21.992724 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:22.015144 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:22.015198 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:22.195011 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:22.195060 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:22.237377 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:22.237423 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:22.284207 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:22.284240 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:22.343882 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:22.343939 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:22.404320 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:22.404356 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:22.465126 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:22.465175 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:22.521920 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:22.521963 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:22.575563 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:22.575601 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:22.627508 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627549 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:22.627808 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:22.627849 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:22.627862 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:22.627871 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627882 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:32.629903 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:36:32.648435 1484104 api_server.go:72] duration metric: took 4m17.129427556s to wait for apiserver process to appear ...
	I1225 13:36:32.648461 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:36:32.648499 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:32.648567 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:32.705637 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:32.705673 1484104 cri.go:89] found id: ""
	I1225 13:36:32.705685 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:32.705754 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.710516 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:32.710591 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:32.757193 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:32.757225 1484104 cri.go:89] found id: ""
	I1225 13:36:32.757236 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:32.757302 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.762255 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:32.762335 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:32.812666 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:32.812692 1484104 cri.go:89] found id: ""
	I1225 13:36:32.812703 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:32.812758 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.817599 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:32.817676 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:32.861969 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:32.862011 1484104 cri.go:89] found id: ""
	I1225 13:36:32.862021 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:32.862084 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.868439 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:32.868525 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:32.929969 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:32.930006 1484104 cri.go:89] found id: ""
	I1225 13:36:32.930015 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:32.930077 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.936071 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:32.936149 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:32.980256 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:32.980280 1484104 cri.go:89] found id: ""
	I1225 13:36:32.980288 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:32.980345 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.985508 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:32.985605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:33.029393 1484104 cri.go:89] found id: ""
	I1225 13:36:33.029429 1484104 logs.go:284] 0 containers: []
	W1225 13:36:33.029440 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:33.029448 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:33.029521 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:33.075129 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.075156 1484104 cri.go:89] found id: ""
	I1225 13:36:33.075167 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:33.075229 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:33.079900 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:33.079940 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.121355 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:33.121391 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:33.205175 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:33.205394 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:33.225359 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:33.225393 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:33.282658 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:33.282710 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:33.334586 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:33.334627 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:33.383538 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:33.383576 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:33.438245 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:33.438284 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:33.487260 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:33.487305 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:33.504627 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:33.504665 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:33.641875 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:33.641912 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:33.692275 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:33.692311 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:33.731932 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:33.731971 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:34.081286 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081325 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:34.081438 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:34.081456 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:34.081465 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:34.081477 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081490 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:44.083633 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:36:44.091721 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:36:44.093215 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:36:44.093242 1484104 api_server.go:131] duration metric: took 11.444775391s to wait for apiserver health ...
	I1225 13:36:44.093251 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:36:44.093279 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:44.093330 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:44.135179 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:44.135212 1484104 cri.go:89] found id: ""
	I1225 13:36:44.135229 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:44.135308 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.140367 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:44.140455 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:44.179525 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:44.179557 1484104 cri.go:89] found id: ""
	I1225 13:36:44.179568 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:44.179644 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.184724 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:44.184822 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:44.225306 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:44.225339 1484104 cri.go:89] found id: ""
	I1225 13:36:44.225351 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:44.225418 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.230354 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:44.230459 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:44.272270 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:44.272300 1484104 cri.go:89] found id: ""
	I1225 13:36:44.272311 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:44.272387 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.277110 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:44.277187 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:44.326495 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.326519 1484104 cri.go:89] found id: ""
	I1225 13:36:44.326527 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:44.326579 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.333707 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:44.333799 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:44.380378 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:44.380410 1484104 cri.go:89] found id: ""
	I1225 13:36:44.380423 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:44.380488 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.390075 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:44.390171 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:44.440171 1484104 cri.go:89] found id: ""
	I1225 13:36:44.440211 1484104 logs.go:284] 0 containers: []
	W1225 13:36:44.440223 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:44.440233 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:44.440321 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:44.482074 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:44.482104 1484104 cri.go:89] found id: ""
	I1225 13:36:44.482114 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:44.482178 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.487171 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:44.487209 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.532144 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:44.532179 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:44.891521 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:44.891568 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:44.938934 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:44.938967 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:45.017433 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.017627 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.039058 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:45.039097 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:45.054560 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:45.054592 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:45.113698 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:45.113735 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:45.158302 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:45.158342 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:45.204784 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:45.204824 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:45.276442 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:45.276483 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:45.320645 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:45.320678 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:45.452638 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:45.452681 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:45.500718 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500757 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:45.500817 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:45.500833 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.500844 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.500853 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500859 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:55.510930 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:36:55.510962 1484104 system_pods.go:61] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.510968 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.510973 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.510977 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.510984 1484104 system_pods.go:61] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.510987 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.510995 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.510999 1484104 system_pods.go:61] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.511014 1484104 system_pods.go:74] duration metric: took 11.417757674s to wait for pod list to return data ...
	I1225 13:36:55.511025 1484104 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:36:55.514087 1484104 default_sa.go:45] found service account: "default"
	I1225 13:36:55.514112 1484104 default_sa.go:55] duration metric: took 3.081452ms for default service account to be created ...
	I1225 13:36:55.514120 1484104 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:36:55.521321 1484104 system_pods.go:86] 8 kube-system pods found
	I1225 13:36:55.521355 1484104 system_pods.go:89] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.521365 1484104 system_pods.go:89] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.521370 1484104 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.521375 1484104 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.521380 1484104 system_pods.go:89] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.521387 1484104 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.521397 1484104 system_pods.go:89] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.521409 1484104 system_pods.go:89] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.521421 1484104 system_pods.go:126] duration metric: took 7.294824ms to wait for k8s-apps to be running ...
	I1225 13:36:55.521433 1484104 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:36:55.521492 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:36:55.540217 1484104 system_svc.go:56] duration metric: took 18.766893ms WaitForService to wait for kubelet.
	I1225 13:36:55.540248 1484104 kubeadm.go:581] duration metric: took 4m40.021246946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:36:55.540271 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:36:55.544519 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:36:55.544685 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:36:55.544742 1484104 node_conditions.go:105] duration metric: took 4.463666ms to run NodePressure ...
	I1225 13:36:55.544783 1484104 start.go:228] waiting for startup goroutines ...
	I1225 13:36:55.544795 1484104 start.go:233] waiting for cluster config update ...
	I1225 13:36:55.544810 1484104 start.go:242] writing updated cluster config ...
	I1225 13:36:55.545268 1484104 ssh_runner.go:195] Run: rm -f paused
	I1225 13:36:55.607984 1484104 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:36:55.609993 1484104 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-344803" cluster and "default" namespace by default
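	The trace above reduces to a handful of node-side checks that can be rerun by hand. A minimal sketch, assuming SSH access to the default-k8s-diff-port-344803 VM and using the apiserver endpoint 192.168.61.39:8444 and pod name taken from the log (the curl -k flag and the final describe step are assumptions added here, not commands from the trace):
	
	  # list the control-plane containers the readiness loop inspects
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail the kubelet and CRI-O journals that the log gatherer reads
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # probe the same healthz endpoint the test polls (self-signed cert, hence -k)
	  curl -k https://192.168.61.39:8444/healthz
	  # inspect why the metrics-server pod never reported Ready during the 4m wait
	  kubectl -n kube-system describe pod metrics-server-57f55c9bc5-slv7p
	
	The last command is the quickest way to see why metrics-server-57f55c9bc5-slv7p stayed Pending (ContainersNotReady) for the entire wait recorded above.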
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:27:08 UTC, ends at Mon 2023-12-25 13:37:04 UTC. --
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.888540417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=00821f3b-2206-470d-8445-e932b1784d33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.934657341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a374c13c-7e08-4874-bf02-55b37b6689b9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.934808728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a374c13c-7e08-4874-bf02-55b37b6689b9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.936272981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9a395740-aaf8-409b-9215-a7a0ae0da4c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.936973546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511423936952594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=9a395740-aaf8-409b-9215-a7a0ae0da4c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.938010775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c0255c42-208b-4119-8505-6f72529b4b87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.938409804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c0255c42-208b-4119-8505-6f72529b4b87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.938645606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c0255c42-208b-4119-8505-6f72529b4b87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.979096555Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb82bb9c-ddd0-4c3a-bbb4-c6e842ab58d0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.979389707Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:78c5ea1084bedac03635aca38473acb061bef0ed8071b64358bfa170d6a82600,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-2ppzp,Uid:8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510888195631993,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-2ppzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:28:06.942437615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&PodSandboxMetadata{Name:busybox,Uid:af0877b6-43de-4c64-b5ac-279fa3325551,Namespace
:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510874014964557,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:27:49.931148958Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-mk9jx,Uid:7487388f-a7b7-401e-9ce3-06fac16ddd47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510873999484971,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:27
:49.931150323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0d6c87f1-93ae-479b-ac0e-4623e326afb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510872396952351,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/
k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-25T13:27:49.931147477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&PodSandboxMetadata{Name:kube-proxy-vw9lf,Uid:2b7377f2-3ae6-4003-977d-4eb3c7cd11f0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510872087677421,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d-4eb3c7cd11f0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernete
s.io/config.seen: 2023-12-25T13:27:49.931144409Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-198979,Uid:bd98fe94865b5b85093069a662706570,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862478702377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bd98fe94865b5b85093069a662706570,kubernetes.io/config.seen: 2023-12-25T13:27:41.937419453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-198979,Uid:4e1a7d0e2b22b5770db35501a52f89ed,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862476278180,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4e1a7d0e2b22b5770db35501a52f89ed,kubernetes.io/config.seen: 2023-12-25T13:27:41.937425106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-198979,Uid:b39706a67360d65bfa3cf2560791efe9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862458252361,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b39706a67360d65bfa3cf2560791efe9,kubernetes.io/config.seen: 2023-12-25T13:27:41.937426599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-198979,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862454219980,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-25T13:27:41.93743128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file=
"go-grpc-middleware/chain.go:25" id=bb82bb9c-ddd0-4c3a-bbb4-c6e842ab58d0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.980529095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=52a5da61-8245-4801-a0a2-d0a35ad4fbec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.980603202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=52a5da61-8245-4801-a0a2-d0a35ad4fbec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.980926543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=52a5da61-8245-4801-a0a2-d0a35ad4fbec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.981881248Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6869548e-c22b-4328-be9e-a4dfc98de02a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.982124097Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:78c5ea1084bedac03635aca38473acb061bef0ed8071b64358bfa170d6a82600,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-2ppzp,Uid:8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510888195631993,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-2ppzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:28:06.942437615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&PodSandboxMetadata{Name:busybox,Uid:af0877b6-43de-4c64-b5ac-279fa3325551,Namespace
:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510874014964557,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:27:49.931148958Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-mk9jx,Uid:7487388f-a7b7-401e-9ce3-06fac16ddd47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510873999484971,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:27
:49.931150323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0d6c87f1-93ae-479b-ac0e-4623e326afb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510872396952351,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/
k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-25T13:27:49.931147477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&PodSandboxMetadata{Name:kube-proxy-vw9lf,Uid:2b7377f2-3ae6-4003-977d-4eb3c7cd11f0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510872087677421,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d-4eb3c7cd11f0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernete
s.io/config.seen: 2023-12-25T13:27:49.931144409Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-198979,Uid:bd98fe94865b5b85093069a662706570,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862478702377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bd98fe94865b5b85093069a662706570,kubernetes.io/config.seen: 2023-12-25T13:27:41.937419453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-198979,Uid:4e1a7d0e2b22b5770db35501a52f89ed,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862476278180,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4e1a7d0e2b22b5770db35501a52f89ed,kubernetes.io/config.seen: 2023-12-25T13:27:41.937425106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-198979,Uid:b39706a67360d65bfa3cf2560791efe9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862458252361,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b39706a67360d65bfa3cf2560791efe9,kubernetes.io/config.seen: 2023-12-25T13:27:41.937426599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-198979,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703510862454219980,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-25T13:27:41.93743128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file=
"go-grpc-middleware/chain.go:25" id=6869548e-c22b-4328-be9e-a4dfc98de02a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.983426008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e6f27a7b-4f88-4467-93b0-2fea05149d9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.983487650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e6f27a7b-4f88-4467-93b0-2fea05149d9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:03 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:03.984499038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6f27a7b-4f88-4467-93b0-2fea05149d9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.008104205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=034a2be6-6805-4220-a076-7d566314a2d3 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.008481903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=034a2be6-6805-4220-a076-7d566314a2d3 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.009957133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=42da8473-f058-482e-8abe-d41f14061b12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.010318365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511424010305907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=42da8473-f058-482e-8abe-d41f14061b12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.011008524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80986da0-dcc0-40d0-a59f-c98973b8d918 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.011164158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80986da0-dcc0-40d0-a59f-c98973b8d918 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:37:04 old-k8s-version-198979 crio[708]: time="2023-12-25 13:37:04.011550533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80986da0-dcc0-40d0-a59f-c98973b8d918 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eee04693d7418       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   f04ef7bd6f0a2       busybox
	b47cff327955c       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   ce277e6ba47cd       coredns-5644d7b6d9-mk9jx
	cf29569278acc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner       0                   b230f817f43ed       storage-provisioner
	910a2a6af295b       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   01599dd503c13       kube-proxy-vw9lf
	8a2abf03e37aa       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   da2644db835d2       etcd-old-k8s-version-198979
	0af8d6cd59ab9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   aa9954da2cb2a       kube-apiserver-old-k8s-version-198979
	e4ad453cbfd10       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   f5a9d9ee3e965       kube-scheduler-old-k8s-version-198979
	90fccd1ab3c39       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   4119b1ccf722c       kube-controller-manager-old-k8s-version-198979
	
	
	==> coredns [b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be] <==
	.:53
	2023-12-25T13:17:05.406Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-25T13:17:05.406Z [INFO] CoreDNS-1.6.2
	2023-12-25T13:17:05.406Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-25T13:17:05.417Z [INFO] 127.0.0.1:50006 - 47573 "HINFO IN 5597062525656395122.292789402761948928. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010323893s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2023-12-25T13:27:54.726Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-25T13:27:54.726Z [INFO] CoreDNS-1.6.2
	2023-12-25T13:27:54.726Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-25T13:27:55.736Z [INFO] 127.0.0.1:47335 - 55245 "HINFO IN 6994226206877751198.4386127104992780867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009589657s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-198979
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-198979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=old-k8s-version-198979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_16_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:16:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:36:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:36:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:36:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:36:20 +0000   Mon, 25 Dec 2023 13:28:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    old-k8s-version-198979
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 754d284c191d40dc9bd29b299bcd741b
	 System UUID:                754d284c-191d-40dc-9bd2-9b299bcd741b
	 Boot ID:                    642f28bc-a4e8-415d-9aee-5f3fcb175a25
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         18m
	  kube-system                coredns-5644d7b6d9-mk9jx                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     20m
	  kube-system                etcd-old-k8s-version-198979                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         19m
	  kube-system                kube-apiserver-old-k8s-version-198979             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         19m
	  kube-system                kube-controller-manager-old-k8s-version-198979    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m14s
	  kube-system                kube-proxy-vw9lf                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20m
	  kube-system                kube-scheduler-old-k8s-version-198979             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         19m
	  kube-system                metrics-server-74d5856cc6-2ppzp                   100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         8m58s
	  kube-system                storage-provisioner                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%!)(MISSING)   0 (0%!)(MISSING)
	  memory             270Mi (12%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                    kube-proxy, old-k8s-version-198979  Starting kube-proxy.
	  Normal  Starting                 9m23s                  kubelet, old-k8s-version-198979     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x7 over 9m22s)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x8 over 9m22s)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet, old-k8s-version-198979     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m11s                  kube-proxy, old-k8s-version-198979  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec25 13:27] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.668638] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144668] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.568037] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.541399] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.119358] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.168559] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.126129] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.264794] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[ +20.249626] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.467144] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec25 13:28] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6] <==
	2023-12-25 13:27:44.783055 I | etcdserver: heartbeat = 100ms
	2023-12-25 13:27:44.783058 I | etcdserver: election = 1000ms
	2023-12-25 13:27:44.783062 I | etcdserver: snapshot count = 10000
	2023-12-25 13:27:44.783072 I | etcdserver: advertise client URLs = https://192.168.39.186:2379
	2023-12-25 13:27:44.791840 I | etcdserver: restarting member 1bfd5d64eb00b2d5 in cluster 7d06a36b1777ee5c at commit index 525
	2023-12-25 13:27:44.792015 I | raft: 1bfd5d64eb00b2d5 became follower at term 2
	2023-12-25 13:27:44.792047 I | raft: newRaft 1bfd5d64eb00b2d5 [peers: [], term: 2, commit: 525, applied: 0, lastindex: 525, lastterm: 2]
	2023-12-25 13:27:44.803880 W | auth: simple token is not cryptographically signed
	2023-12-25 13:27:44.806670 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-25 13:27:44.808452 I | etcdserver/membership: added member 1bfd5d64eb00b2d5 [https://192.168.39.186:2380] to cluster 7d06a36b1777ee5c
	2023-12-25 13:27:44.808545 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-25 13:27:44.808597 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-25 13:27:44.809081 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-25 13:27:44.809264 I | embed: listening for metrics on http://192.168.39.186:2381
	2023-12-25 13:27:44.809729 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-25 13:27:46.592440 I | raft: 1bfd5d64eb00b2d5 is starting a new election at term 2
	2023-12-25 13:27:46.592506 I | raft: 1bfd5d64eb00b2d5 became candidate at term 3
	2023-12-25 13:27:46.592520 I | raft: 1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3
	2023-12-25 13:27:46.592530 I | raft: 1bfd5d64eb00b2d5 became leader at term 3
	2023-12-25 13:27:46.592536 I | raft: raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3
	2023-12-25 13:27:46.594242 I | etcdserver: published {Name:old-k8s-version-198979 ClientURLs:[https://192.168.39.186:2379]} to cluster 7d06a36b1777ee5c
	2023-12-25 13:27:46.594453 I | embed: ready to serve client requests
	2023-12-25 13:27:46.596100 I | embed: serving client requests on 192.168.39.186:2379
	2023-12-25 13:27:46.596421 I | embed: ready to serve client requests
	2023-12-25 13:27:46.600201 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 13:37:04 up 10 min,  0 users,  load average: 0.58, 0.29, 0.15
	Linux old-k8s-version-198979 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410] <==
	I1225 13:28:51.790612       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:28:51.790710       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:28:51.790822       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:28:51.790832       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:30:51.791282       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:30:51.791431       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:30:51.791511       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:30:51.791536       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:32:50.954642       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:32:50.955130       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:32:50.955273       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:32:50.955314       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:33:50.955635       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:33:50.955958       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:33:50.956123       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:33:50.956170       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:35:50.956610       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:35:50.957115       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:35:50.957238       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:35:50.957277       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2] <==
	E1225 13:30:38.631975       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:30:49.021191       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:31:08.884667       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:31:21.023421       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:31:39.137017       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:31:53.025625       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:32:09.389594       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:32:25.028098       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:32:39.642924       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:32:57.030039       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:33:09.896135       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:33:29.032610       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:33:40.148154       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:34:01.035251       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:34:10.400105       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:34:33.037364       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:34:40.652876       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:35:05.039498       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:35:10.904987       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:35:37.042070       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:35:41.157189       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:36:09.044458       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:36:11.409352       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:36:41.046877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:36:41.661652       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff] <==
	W1225 13:17:04.361005       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1225 13:17:04.374480       1 node.go:135] Successfully retrieved node IP: 192.168.39.186
	I1225 13:17:04.374593       1 server_others.go:149] Using iptables Proxier.
	I1225 13:17:04.375513       1 server.go:529] Version: v1.16.0
	I1225 13:17:04.377030       1 config.go:313] Starting service config controller
	I1225 13:17:04.377157       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1225 13:17:04.377669       1 config.go:131] Starting endpoints config controller
	I1225 13:17:04.377729       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1225 13:17:04.478716       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1225 13:17:04.478904       1 shared_informer.go:204] Caches are synced for service config 
	W1225 13:27:53.145014       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1225 13:27:53.299225       1 node.go:135] Successfully retrieved node IP: 192.168.39.186
	I1225 13:27:53.299375       1 server_others.go:149] Using iptables Proxier.
	I1225 13:27:53.566891       1 server.go:529] Version: v1.16.0
	I1225 13:27:53.574132       1 config.go:313] Starting service config controller
	I1225 13:27:53.574208       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1225 13:27:53.574269       1 config.go:131] Starting endpoints config controller
	I1225 13:27:53.574282       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1225 13:27:53.677873       1 shared_informer.go:204] Caches are synced for service config 
	I1225 13:27:53.678132       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc] <==
	E1225 13:16:42.654916       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 13:16:43.621845       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:16:43.637796       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 13:16:43.642275       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 13:16:43.643110       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1225 13:16:43.645101       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 13:16:43.646333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 13:16:43.649605       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 13:16:43.649979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1225 13:16:43.657201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:16:43.657770       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:16:43.659806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 13:17:02.500668       1 factory.go:585] pod is already present in the activeQ
	E1225 13:17:02.764901       1 factory.go:585] pod is already present in the activeQ
	I1225 13:27:44.079406       1 serving.go:319] Generated self-signed cert in-memory
	W1225 13:27:49.976727       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:27:49.977885       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:27:49.977975       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:27:49.977983       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:27:49.992426       1 server.go:143] Version: v1.16.0
	I1225 13:27:49.992529       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1225 13:27:50.018072       1 authorization.go:47] Authorization is disabled
	W1225 13:27:50.018146       1 authentication.go:79] Authentication is disabled
	I1225 13:27:50.018195       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1225 13:27:50.018570       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:27:08 UTC, ends at Mon 2023-12-25 13:37:04 UTC. --
	Dec 25 13:32:41 old-k8s-version-198979 kubelet[1032]: E1225 13:32:41.972092    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:32:42 old-k8s-version-198979 kubelet[1032]: E1225 13:32:42.063379    1032 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 25 13:32:54 old-k8s-version-198979 kubelet[1032]: E1225 13:32:54.966276    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:33:07 old-k8s-version-198979 kubelet[1032]: E1225 13:33:07.966728    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:33:18 old-k8s-version-198979 kubelet[1032]: E1225 13:33:18.967055    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:33:33 old-k8s-version-198979 kubelet[1032]: E1225 13:33:33.966609    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:33:46 old-k8s-version-198979 kubelet[1032]: E1225 13:33:46.966708    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:33:58 old-k8s-version-198979 kubelet[1032]: E1225 13:33:58.976209    1032 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:33:58 old-k8s-version-198979 kubelet[1032]: E1225 13:33:58.976352    1032 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:33:58 old-k8s-version-198979 kubelet[1032]: E1225 13:33:58.976422    1032 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:33:58 old-k8s-version-198979 kubelet[1032]: E1225 13:33:58.976461    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 25 13:34:11 old-k8s-version-198979 kubelet[1032]: E1225 13:34:11.966914    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:34:22 old-k8s-version-198979 kubelet[1032]: E1225 13:34:22.966035    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:34:35 old-k8s-version-198979 kubelet[1032]: E1225 13:34:35.966250    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:34:48 old-k8s-version-198979 kubelet[1032]: E1225 13:34:48.966232    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:34:59 old-k8s-version-198979 kubelet[1032]: E1225 13:34:59.965884    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:35:14 old-k8s-version-198979 kubelet[1032]: E1225 13:35:14.965652    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:35:26 old-k8s-version-198979 kubelet[1032]: E1225 13:35:26.966579    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:35:39 old-k8s-version-198979 kubelet[1032]: E1225 13:35:39.966334    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:35:54 old-k8s-version-198979 kubelet[1032]: E1225 13:35:54.966240    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:36:09 old-k8s-version-198979 kubelet[1032]: E1225 13:36:09.968733    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:36:22 old-k8s-version-198979 kubelet[1032]: E1225 13:36:22.966226    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:36:34 old-k8s-version-198979 kubelet[1032]: E1225 13:36:34.966363    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:36:49 old-k8s-version-198979 kubelet[1032]: E1225 13:36:49.968038    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:37:02 old-k8s-version-198979 kubelet[1032]: E1225 13:37:02.966391    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1] <==
	I1225 13:17:04.794227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:17:04.819091       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:17:04.820680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:17:04.877918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:17:04.878541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca479bec-c1b3-4241-884a-1a7f6f0c5197", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133 became leader
	I1225 13:17:04.879326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133!
	I1225 13:17:04.980602       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133!
	I1225 13:27:53.953520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:53.978630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:53.979970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:28:11.430677       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:28:11.431534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca479bec-c1b3-4241-884a-1a7f6f0c5197", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59 became leader
	I1225 13:28:11.431673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59!
	I1225 13:28:11.532560       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-198979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-2ppzp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp: exit status 1 (73.356136ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-2ppzp" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.59s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:31:26.363355 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-330063 -n no-preload-330063
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:40:17.40068866 +0000 UTC m=+5042.019165681
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-330063 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-330063 logs -n 25: (1.819622133s)
E1225 13:40:19.760352 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-435411                           | kubernetes-upgrade-435411    | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:17 UTC |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-198979        | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:25:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:25:09.868120 1484104 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:25:09.868323 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868335 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:25:09.868341 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868532 1484104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:25:09.869122 1484104 out.go:303] Setting JSON to false
	I1225 13:25:09.870130 1484104 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158863,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:25:09.870205 1484104 start.go:138] virtualization: kvm guest
	I1225 13:25:09.872541 1484104 out.go:177] * [default-k8s-diff-port-344803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:25:09.874217 1484104 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:25:09.874305 1484104 notify.go:220] Checking for updates...
	I1225 13:25:09.875839 1484104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:25:09.877587 1484104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:25:09.879065 1484104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:25:09.880503 1484104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:25:09.881819 1484104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:25:09.883607 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:25:09.884026 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.884110 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.899270 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1225 13:25:09.899708 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.900286 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.900337 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.900687 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.900912 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.901190 1484104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:25:09.901525 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.901579 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.916694 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1225 13:25:09.917130 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.917673 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.917704 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.918085 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.918333 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.953536 1484104 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:25:09.955050 1484104 start.go:298] selected driver: kvm2
	I1225 13:25:09.955065 1484104 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.955241 1484104 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:25:09.955956 1484104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.956047 1484104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:25:09.971769 1484104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:25:09.972199 1484104 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:25:09.972296 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:25:09.972313 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:25:09.972334 1484104 start_flags.go:323] config:
	{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.972534 1484104 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.975411 1484104 out.go:177] * Starting control plane node default-k8s-diff-port-344803 in cluster default-k8s-diff-port-344803
	I1225 13:25:07.694690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:09.976744 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:25:09.976814 1484104 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:25:09.976830 1484104 cache.go:56] Caching tarball of preloaded images
	I1225 13:25:09.976928 1484104 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:25:09.976941 1484104 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:25:09.977353 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:25:09.977710 1484104 start.go:365] acquiring machines lock for default-k8s-diff-port-344803: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:10.766734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:16.850681 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:19.922690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:25.998796 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:29.070780 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:35.150661 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:38.222822 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:44.302734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.379073 1483118 start.go:369] acquired machines lock for "no-preload-330063" in 3m45.211894916s
	I1225 13:25:50.379143 1483118 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:25:50.379155 1483118 fix.go:54] fixHost starting: 
	I1225 13:25:50.379692 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:50.379739 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:50.395491 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1225 13:25:50.395953 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:50.396490 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:25:50.396512 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:50.396880 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:50.397080 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:25:50.397224 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:25:50.399083 1483118 fix.go:102] recreateIfNeeded on no-preload-330063: state=Stopped err=<nil>
	I1225 13:25:50.399110 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	W1225 13:25:50.399283 1483118 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:25:50.401483 1483118 out.go:177] * Restarting existing kvm2 VM for "no-preload-330063" ...
	I1225 13:25:47.374782 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.376505 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:25:50.376562 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:25:50.378895 1482618 machine.go:91] provisioned docker machine in 4m37.578359235s
	I1225 13:25:50.378958 1482618 fix.go:56] fixHost completed within 4m37.60680956s
	I1225 13:25:50.378968 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 4m37.606859437s
	W1225 13:25:50.378992 1482618 start.go:694] error starting host: provision: host is not running
	W1225 13:25:50.379100 1482618 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 13:25:50.379111 1482618 start.go:709] Will try again in 5 seconds ...
	I1225 13:25:50.403280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Start
	I1225 13:25:50.403507 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring networks are active...
	I1225 13:25:50.404422 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network default is active
	I1225 13:25:50.404784 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network mk-no-preload-330063 is active
	I1225 13:25:50.405087 1483118 main.go:141] libmachine: (no-preload-330063) Getting domain xml...
	I1225 13:25:50.405654 1483118 main.go:141] libmachine: (no-preload-330063) Creating domain...
	I1225 13:25:51.676192 1483118 main.go:141] libmachine: (no-preload-330063) Waiting to get IP...
	I1225 13:25:51.677110 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.677638 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.677715 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.677616 1484268 retry.go:31] will retry after 268.018359ms: waiting for machine to come up
	I1225 13:25:51.947683 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.948172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.948198 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.948118 1484268 retry.go:31] will retry after 278.681465ms: waiting for machine to come up
	I1225 13:25:52.228745 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.229234 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.229265 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.229166 1484268 retry.go:31] will retry after 329.72609ms: waiting for machine to come up
	I1225 13:25:52.560878 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.561315 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.561348 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.561257 1484268 retry.go:31] will retry after 398.659264ms: waiting for machine to come up
	I1225 13:25:52.962067 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.962596 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.962620 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.962548 1484268 retry.go:31] will retry after 474.736894ms: waiting for machine to come up
	I1225 13:25:53.439369 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:53.439834 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:53.439856 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:53.439795 1484268 retry.go:31] will retry after 632.915199ms: waiting for machine to come up
	I1225 13:25:54.074832 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.075320 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.075349 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.075286 1484268 retry.go:31] will retry after 889.605242ms: waiting for machine to come up
	I1225 13:25:54.966323 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.966800 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.966822 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.966757 1484268 retry.go:31] will retry after 1.322678644s: waiting for machine to come up
	I1225 13:25:55.379741 1482618 start.go:365] acquiring machines lock for old-k8s-version-198979: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:56.291182 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:56.291604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:56.291633 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:56.291567 1484268 retry.go:31] will retry after 1.717647471s: waiting for machine to come up
	I1225 13:25:58.011626 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:58.012081 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:58.012116 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:58.012018 1484268 retry.go:31] will retry after 2.29935858s: waiting for machine to come up
	I1225 13:26:00.314446 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:00.314833 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:00.314858 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:00.314806 1484268 retry.go:31] will retry after 2.50206405s: waiting for machine to come up
	I1225 13:26:02.819965 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:02.820458 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:02.820490 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:02.820403 1484268 retry.go:31] will retry after 2.332185519s: waiting for machine to come up
	I1225 13:26:05.155725 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:05.156228 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:05.156263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:05.156153 1484268 retry.go:31] will retry after 2.769754662s: waiting for machine to come up
	I1225 13:26:07.929629 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:07.930087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:07.930126 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:07.930040 1484268 retry.go:31] will retry after 4.407133766s: waiting for machine to come up
	I1225 13:26:13.687348 1483946 start.go:369] acquired machines lock for "embed-certs-880612" in 1m27.002513209s
	I1225 13:26:13.687426 1483946 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:13.687436 1483946 fix.go:54] fixHost starting: 
	I1225 13:26:13.687850 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:13.687916 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:13.706054 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1225 13:26:13.706521 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:13.707063 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:26:13.707087 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:13.707472 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:13.707645 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:13.707832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:26:13.709643 1483946 fix.go:102] recreateIfNeeded on embed-certs-880612: state=Stopped err=<nil>
	I1225 13:26:13.709676 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	W1225 13:26:13.709868 1483946 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:13.712452 1483946 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880612" ...
	I1225 13:26:12.339674 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340219 1483118 main.go:141] libmachine: (no-preload-330063) Found IP for machine: 192.168.72.232
	I1225 13:26:12.340243 1483118 main.go:141] libmachine: (no-preload-330063) Reserving static IP address...
	I1225 13:26:12.340263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has current primary IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340846 1483118 main.go:141] libmachine: (no-preload-330063) Reserved static IP address: 192.168.72.232
	I1225 13:26:12.340896 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.340912 1483118 main.go:141] libmachine: (no-preload-330063) Waiting for SSH to be available...
	I1225 13:26:12.340947 1483118 main.go:141] libmachine: (no-preload-330063) DBG | skip adding static IP to network mk-no-preload-330063 - found existing host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"}
	I1225 13:26:12.340962 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Getting to WaitForSSH function...
	I1225 13:26:12.343164 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343417 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.343448 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343552 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH client type: external
	I1225 13:26:12.343566 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa (-rw-------)
	I1225 13:26:12.343587 1483118 main.go:141] libmachine: (no-preload-330063) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:12.343595 1483118 main.go:141] libmachine: (no-preload-330063) DBG | About to run SSH command:
	I1225 13:26:12.343603 1483118 main.go:141] libmachine: (no-preload-330063) DBG | exit 0
	I1225 13:26:12.434661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:12.435101 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetConfigRaw
	I1225 13:26:12.435827 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.438300 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438673 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.438705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438870 1483118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:26:12.439074 1483118 machine.go:88] provisioning docker machine ...
	I1225 13:26:12.439093 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:12.439335 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439556 1483118 buildroot.go:166] provisioning hostname "no-preload-330063"
	I1225 13:26:12.439584 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439789 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.442273 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442630 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.442661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442768 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.442956 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443127 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443271 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.443410 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.443772 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.443787 1483118 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-330063 && echo "no-preload-330063" | sudo tee /etc/hostname
	I1225 13:26:12.581579 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-330063
	
	I1225 13:26:12.581609 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.584621 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.584949 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.584979 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.585252 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.585495 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585656 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585790 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.585947 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.586320 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.586346 1483118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-330063' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-330063/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-330063' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:12.717139 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:12.717176 1483118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:12.717197 1483118 buildroot.go:174] setting up certificates
	I1225 13:26:12.717212 1483118 provision.go:83] configureAuth start
	I1225 13:26:12.717229 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.717570 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.720469 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.720828 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.720859 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.721016 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.723432 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723758 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.723815 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723944 1483118 provision.go:138] copyHostCerts
	I1225 13:26:12.724021 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:12.724035 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:12.724102 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:12.724207 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:12.724215 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:12.724242 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:12.724323 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:12.724330 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:12.724351 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:12.724408 1483118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.no-preload-330063 san=[192.168.72.232 192.168.72.232 localhost 127.0.0.1 minikube no-preload-330063]
	I1225 13:26:12.929593 1483118 provision.go:172] copyRemoteCerts
	I1225 13:26:12.929665 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:12.929699 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.932608 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.932934 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.932978 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.933144 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.933389 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.933581 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.933738 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.023574 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:13.047157 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:26:13.070779 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:13.094249 1483118 provision.go:86] duration metric: configureAuth took 377.018818ms
	I1225 13:26:13.094284 1483118 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:13.094538 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:13.094665 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.097705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098133 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.098179 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098429 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.098708 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.098888 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.099029 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.099191 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.099516 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.099534 1483118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:13.430084 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:13.430138 1483118 machine.go:91] provisioned docker machine in 991.050011ms
	I1225 13:26:13.430150 1483118 start.go:300] post-start starting for "no-preload-330063" (driver="kvm2")
	I1225 13:26:13.430162 1483118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:13.430185 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.430616 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:13.430661 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.433623 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434018 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.434054 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434191 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.434413 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.434586 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.434700 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.523954 1483118 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:13.528009 1483118 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:13.528040 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:13.528118 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:13.528214 1483118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:13.528359 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:13.536826 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:13.561011 1483118 start.go:303] post-start completed in 130.840608ms
	I1225 13:26:13.561046 1483118 fix.go:56] fixHost completed within 23.181891118s
	I1225 13:26:13.561078 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.563717 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564040 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.564087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564268 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.564504 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564702 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564812 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.564965 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.565326 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.565340 1483118 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:26:13.687155 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510773.671808211
	
	I1225 13:26:13.687181 1483118 fix.go:206] guest clock: 1703510773.671808211
	I1225 13:26:13.687189 1483118 fix.go:219] Guest: 2023-12-25 13:26:13.671808211 +0000 UTC Remote: 2023-12-25 13:26:13.561052282 +0000 UTC m=+248.574935292 (delta=110.755929ms)
	I1225 13:26:13.687209 1483118 fix.go:190] guest clock delta is within tolerance: 110.755929ms
	I1225 13:26:13.687214 1483118 start.go:83] releasing machines lock for "no-preload-330063", held for 23.308100249s
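
The fix.go lines above show the post-restart clock check: minikube runs `date +%s.%N` on the guest, parses the result, and accepts the machine when the guest/host delta stays inside a tolerance (the 110ms delta here passes). A minimal Go sketch of that comparison; the tolerance value and function names are assumptions, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the `date +%s.%N` output captured from the guest and
    // returns its absolute difference from the local clock. Nanosecond precision
    // is not preserved by the float conversion, which is fine for a tolerance check.
    func clockDelta(guestDate string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	d := time.Since(guest)
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    func main() {
    	d, _ := clockDelta("1703510773.671808211")
    	fmt.Println("guest clock delta:", d, "within tolerance:", d <= 2*time.Second) // tolerance assumed
    }
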
	I1225 13:26:13.687243 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.687561 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:13.690172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690572 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.690604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690780 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691362 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691534 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691615 1483118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:13.691670 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.691807 1483118 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:13.691842 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.694593 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694871 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694943 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.694967 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695202 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695293 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.695319 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695452 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695508 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695613 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.695725 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695813 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.695899 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.696068 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.812135 1483118 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:13.817944 1483118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:13.965641 1483118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:13.973263 1483118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:13.973433 1483118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:13.991077 1483118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:13.991112 1483118 start.go:475] detecting cgroup driver to use...
	I1225 13:26:13.991197 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:14.005649 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:14.018464 1483118 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:14.018540 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:14.031361 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:14.046011 1483118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:14.152826 1483118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:14.281488 1483118 docker.go:219] disabling docker service ...
	I1225 13:26:14.281577 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:14.297584 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:14.311896 1483118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:14.448141 1483118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:14.583111 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:14.599419 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:14.619831 1483118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:14.619909 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.631979 1483118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:14.632065 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.643119 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.655441 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.666525 1483118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:14.678080 1483118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:14.687889 1483118 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:14.687957 1483118 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:14.702290 1483118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
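
The sequence above is a fallback: reading the bridge netfilter sysctl fails with status 255 because br_netfilter is not yet loaded, so the module is loaded with modprobe before IPv4 forwarding is enabled. A sketch of that ordering in Go, assuming root on the guest; the function name is illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback in the log: if the
    // bridge-nf-call-iptables sysctl cannot be read, load br_netfilter,
    // then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
    			return fmt.Errorf("modprobe br_netfilter failed")
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
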
	I1225 13:26:14.712225 1483118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:14.836207 1483118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:15.019332 1483118 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:15.019424 1483118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:15.024755 1483118 start.go:543] Will wait 60s for crictl version
	I1225 13:26:15.024844 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.028652 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:15.074415 1483118 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:15.074550 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.128559 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.178477 1483118 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1225 13:26:13.714488 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Start
	I1225 13:26:13.714708 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring networks are active...
	I1225 13:26:13.715513 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network default is active
	I1225 13:26:13.715868 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network mk-embed-certs-880612 is active
	I1225 13:26:13.716279 1483946 main.go:141] libmachine: (embed-certs-880612) Getting domain xml...
	I1225 13:26:13.716905 1483946 main.go:141] libmachine: (embed-certs-880612) Creating domain...
	I1225 13:26:15.049817 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting to get IP...
	I1225 13:26:15.051040 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.051641 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.051756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.051615 1484395 retry.go:31] will retry after 199.911042ms: waiting for machine to come up
	I1225 13:26:15.253158 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.260582 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.260620 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.260519 1484395 retry.go:31] will retry after 285.022636ms: waiting for machine to come up
	I1225 13:26:15.547290 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.547756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.547787 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.547692 1484395 retry.go:31] will retry after 327.637369ms: waiting for machine to come up
	I1225 13:26:15.877618 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.878119 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.878153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.878058 1484395 retry.go:31] will retry after 384.668489ms: waiting for machine to come up
	I1225 13:26:16.264592 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.265056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.265084 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.265005 1484395 retry.go:31] will retry after 468.984683ms: waiting for machine to come up
	I1225 13:26:15.180205 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:15.183372 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.183820 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:15.183862 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.184054 1483118 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:15.188935 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:15.202790 1483118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:26:15.202839 1483118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:15.245267 1483118 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:26:15.245297 1483118 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:26:15.245409 1483118 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.245430 1483118 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.245448 1483118 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.245467 1483118 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.245468 1483118 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1225 13:26:15.245534 1483118 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.245447 1483118 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.245404 1483118 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.247839 1483118 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.247850 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.247874 1483118 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.247911 1483118 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.247980 1483118 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1225 13:26:15.247984 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.248043 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.248281 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.404332 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.405729 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.407712 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1225 13:26:15.412419 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.413201 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.413349 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.453117 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.533541 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.536843 1483118 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1225 13:26:15.536896 1483118 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.536950 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.576965 1483118 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1225 13:26:15.577010 1483118 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.577078 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688643 1483118 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1225 13:26:15.688696 1483118 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.688710 1483118 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1225 13:26:15.688750 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688759 1483118 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.688765 1483118 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1225 13:26:15.688794 1483118 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.688813 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688835 1483118 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1225 13:26:15.688847 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688858 1483118 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.688869 1483118 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1225 13:26:15.688890 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688896 1483118 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.688910 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.688921 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688949 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.706288 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.779043 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1225 13:26:15.779170 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1225 13:26:15.779181 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.779297 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:15.779309 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.779274 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.779439 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.779507 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.864891 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.865017 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.884972 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885024 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885035 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1225 13:26:15.885045 1483118 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885091 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885094 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885109 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885146 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1225 13:26:15.885167 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1225 13:26:15.885229 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:15.885239 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1225 13:26:15.885273 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1225 13:26:15.898753 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1225 13:26:17.966777 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.08165399s)
	I1225 13:26:17.966822 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1225 13:26:17.966836 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.081714527s)
	I1225 13:26:17.966865 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.081735795s)
	I1225 13:26:17.966848 1483118 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:17.966894 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966874 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966936 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:16.736013 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.736519 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.736553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.736449 1484395 retry.go:31] will retry after 873.004128ms: waiting for machine to come up
	I1225 13:26:17.611675 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:17.612135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:17.612160 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:17.612085 1484395 retry.go:31] will retry after 1.093577821s: waiting for machine to come up
	I1225 13:26:18.707411 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:18.707936 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:18.707994 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:18.707904 1484395 retry.go:31] will retry after 1.364130049s: waiting for machine to come up
	I1225 13:26:20.074559 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:20.075102 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:20.075135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:20.075033 1484395 retry.go:31] will retry after 1.740290763s: waiting for machine to come up
	I1225 13:26:21.677915 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.710943608s)
	I1225 13:26:21.677958 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1225 13:26:21.677990 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:21.678050 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:23.630977 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.952875837s)
	I1225 13:26:23.631018 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1225 13:26:23.631051 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:23.631112 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:21.818166 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:21.818695 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:21.818728 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:21.818641 1484395 retry.go:31] will retry after 2.035498479s: waiting for machine to come up
	I1225 13:26:23.856368 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:23.857094 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:23.857120 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:23.856997 1484395 retry.go:31] will retry after 2.331127519s: waiting for machine to come up
	I1225 13:26:26.191862 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:26.192571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:26.192608 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:26.192513 1484395 retry.go:31] will retry after 3.191632717s: waiting for machine to come up
	I1225 13:26:26.193816 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.56267278s)
	I1225 13:26:26.193849 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1225 13:26:26.193884 1483118 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:26.193951 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:27.342879 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.148892619s)
	I1225 13:26:27.342913 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 13:26:27.342948 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:27.343014 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:29.909035 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.565991605s)
	I1225 13:26:29.909080 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1225 13:26:29.909105 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.909159 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.386007 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:29.386335 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:29.386366 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:29.386294 1484395 retry.go:31] will retry after 3.786228584s: waiting for machine to come up
	I1225 13:26:34.439583 1484104 start.go:369] acquired machines lock for "default-k8s-diff-port-344803" in 1m24.461830001s
	I1225 13:26:34.439666 1484104 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:34.439686 1484104 fix.go:54] fixHost starting: 
	I1225 13:26:34.440164 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:34.440230 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:34.457403 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I1225 13:26:34.457867 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:34.458351 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:26:34.458422 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:34.458748 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:34.458989 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:34.459176 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:26:34.460975 1484104 fix.go:102] recreateIfNeeded on default-k8s-diff-port-344803: state=Stopped err=<nil>
	I1225 13:26:34.461008 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	W1225 13:26:34.461188 1484104 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:34.463715 1484104 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-344803" ...
	I1225 13:26:34.465022 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Start
	I1225 13:26:34.465274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring networks are active...
	I1225 13:26:34.466182 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network default is active
	I1225 13:26:34.466565 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network mk-default-k8s-diff-port-344803 is active
	I1225 13:26:34.466922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Getting domain xml...
	I1225 13:26:34.467691 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Creating domain...
	I1225 13:26:32.065345 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.15614946s)
	I1225 13:26:32.065380 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1225 13:26:32.065414 1483118 cache_images.go:123] Successfully loaded all cached images
	I1225 13:26:32.065421 1483118 cache_images.go:92] LoadImages completed in 16.820112197s
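
Because this is the no-preload profile, no preload tarball exists for v1.29.0-rc.2: every required image is reported missing from CRI-O, removed with crictl rmi, then transferred from the local cache (when not already on the VM) and loaded with `podman load`, one image at a time, which is why LoadImages takes roughly 17 seconds. A sketch of the per-image sequence; run and copyToGuest stand in for the ssh/scp helpers and are assumptions, not minikube's actual functions:

    package main

    import "fmt"

    // loadCachedImage skips the copy when the tarball is already on the guest,
    // then loads it with podman so the image lands in the containers/storage
    // that CRI-O reads.
    func loadCachedImage(run func(cmd string) error, copyToGuest func(local, remote string) error, localTar, remoteTar string) error {
    	if err := run(`stat -c "%s %y" ` + remoteTar); err != nil {
    		if err := copyToGuest(localTar, remoteTar); err != nil {
    			return err
    		}
    	}
    	return run("sudo podman load -i " + remoteTar)
    }

    func main() {
    	run := func(cmd string) error { fmt.Println("ssh:", cmd); return nil }
    	cp := func(local, remote string) error { fmt.Println("scp:", local, "->", remote); return nil }
    	_ = loadCachedImage(run, cp,
    		"/home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0",
    		"/var/lib/minikube/images/etcd_3.5.10-0")
    }
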
	I1225 13:26:32.065498 1483118 ssh_runner.go:195] Run: crio config
	I1225 13:26:32.120989 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:32.121019 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:32.121045 1483118 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:32.121063 1483118 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-330063 NodeName:no-preload-330063 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:32.121216 1483118 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-330063"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:32.121297 1483118 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-330063 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:32.121357 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:26:32.132569 1483118 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:32.132677 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:32.142052 1483118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1225 13:26:32.158590 1483118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:26:32.174761 1483118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1225 13:26:32.191518 1483118 ssh_runner.go:195] Run: grep 192.168.72.232	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:32.195353 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
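
The one-liner above keeps /etc/hosts idempotent: it strips any existing line for control-plane.minikube.internal, appends the current control-plane IP, and copies the temp file back in a single sudo cp. A local Go sketch of the same pattern (minikube performs it over SSH on the guest; the function name is illustrative):

    package main

    import (
    	"os"
    	"strings"
    )

    // setHostsEntry drops any existing line ending in "<tab>name", appends
    // "ip<tab>name", and rewrites the file, so repeated restarts never
    // accumulate duplicate entries.
    func setHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = setHostsEntry("/etc/hosts", "192.168.72.232", "control-plane.minikube.internal")
    }
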
	I1225 13:26:32.206845 1483118 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063 for IP: 192.168.72.232
	I1225 13:26:32.206879 1483118 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:32.207098 1483118 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:32.207145 1483118 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:32.207212 1483118 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.key
	I1225 13:26:32.207270 1483118 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key.4e9d87c6
	I1225 13:26:32.207323 1483118 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key
	I1225 13:26:32.207437 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:32.207465 1483118 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:32.207475 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:32.207513 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:32.207539 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:32.207565 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:32.207607 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:32.208427 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:32.231142 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:32.253335 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:32.275165 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:32.297762 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:32.320671 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:32.344125 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:32.368066 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:32.390688 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:32.412849 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:32.435445 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:32.457687 1483118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:32.474494 1483118 ssh_runner.go:195] Run: openssl version
	I1225 13:26:32.480146 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:32.491141 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495831 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495902 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.501393 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:32.511643 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:32.521843 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526421 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526514 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.531988 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:32.542920 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:32.553604 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558381 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558478 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.563913 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
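
The openssl/ln pairs above exist because OpenSSL resolves CA certificates in /etc/ssl/certs by subject-hash filenames of the form "<hash>.0"; each PEM copied in gets its hash computed with `openssl x509 -hash -noout` and a matching symlink. A sketch of the same pairing, assuming the openssl CLI and write access to /etc/ssl/certs; not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the certificate's subject hash and creates the
    // "<hash>.0" symlink that OpenSSL's cert-directory lookup expects.
    func linkCertByHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl x509 -hash: %w", err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // behave like `ln -fs`: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
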
	I1225 13:26:32.574591 1483118 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:32.579046 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:32.584821 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:32.590781 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:32.596456 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:32.601978 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:32.607981 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:32.613785 1483118 kubeadm.go:404] StartCluster: {Name:no-preload-330063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:32.613897 1483118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:32.613955 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:32.651782 1483118 cri.go:89] found id: ""
	I1225 13:26:32.651858 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:32.664617 1483118 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:32.664648 1483118 kubeadm.go:636] restartCluster start
	I1225 13:26:32.664710 1483118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:32.674727 1483118 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:32.676090 1483118 kubeconfig.go:92] found "no-preload-330063" server: "https://192.168.72.232:8443"
	I1225 13:26:32.679085 1483118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:32.689716 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:32.689824 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:32.702305 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.189843 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.189955 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.202514 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.689935 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.690048 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.703975 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.190601 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.190722 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.203987 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.690505 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.690639 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.701704 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
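
With no running kube-system containers found, restartCluster keeps probing for a live apiserver roughly every 500ms using `pgrep -xnf kube-apiserver.*minikube.*` until the control plane comes back or a deadline expires. A sketch of that polling loop; run stands in for the ssh command helper, and the interval and timeout values are assumptions, not minikube's exact settings:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForAPIServer runs pgrep on the guest at a fixed interval until a
    // kube-apiserver process is found or the deadline passes.
    func waitForAPIServer(run func(cmd string) error, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	run := func(cmd string) error { fmt.Println("ssh:", cmd); return fmt.Errorf("not running yet") }
    	_ = waitForAPIServer(run, 500*time.Millisecond, 2*time.Second)
    }
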
	I1225 13:26:33.173890 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174349 1483946 main.go:141] libmachine: (embed-certs-880612) Found IP for machine: 192.168.50.179
	I1225 13:26:33.174372 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has current primary IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174405 1483946 main.go:141] libmachine: (embed-certs-880612) Reserving static IP address...
	I1225 13:26:33.174805 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.174845 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | skip adding static IP to network mk-embed-certs-880612 - found existing host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"}
	I1225 13:26:33.174860 1483946 main.go:141] libmachine: (embed-certs-880612) Reserved static IP address: 192.168.50.179
	I1225 13:26:33.174877 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting for SSH to be available...
	I1225 13:26:33.174892 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Getting to WaitForSSH function...
	I1225 13:26:33.177207 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177579 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.177609 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH client type: external
	I1225 13:26:33.177737 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa (-rw-------)
	I1225 13:26:33.177777 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:33.177790 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | About to run SSH command:
	I1225 13:26:33.177803 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | exit 0
	I1225 13:26:33.274328 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:33.274736 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetConfigRaw
	I1225 13:26:33.275462 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.278056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278429 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.278483 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278725 1483946 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:26:33.278982 1483946 machine.go:88] provisioning docker machine ...
	I1225 13:26:33.279013 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:33.279236 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279448 1483946 buildroot.go:166] provisioning hostname "embed-certs-880612"
	I1225 13:26:33.279468 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279619 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.281930 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282277 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.282311 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282474 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.282704 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.282885 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.283033 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.283194 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.283700 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.283723 1483946 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880612 && echo "embed-certs-880612" | sudo tee /etc/hostname
	I1225 13:26:33.433456 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880612
	
	I1225 13:26:33.433483 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.436392 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.436794 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.436840 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.437004 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.437233 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437446 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437595 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.437783 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.438112 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.438134 1483946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880612/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:33.579776 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:33.579813 1483946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:33.579845 1483946 buildroot.go:174] setting up certificates
	I1225 13:26:33.579859 1483946 provision.go:83] configureAuth start
	I1225 13:26:33.579874 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.580151 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.582843 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583233 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.583266 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583461 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.585844 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586216 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.586253 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586454 1483946 provision.go:138] copyHostCerts
	I1225 13:26:33.586532 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:33.586548 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:33.586604 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:33.586692 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:33.586704 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:33.586723 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:33.586771 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:33.586778 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:33.586797 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:33.586837 1483946 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880612 san=[192.168.50.179 192.168.50.179 localhost 127.0.0.1 minikube embed-certs-880612]
	I1225 13:26:33.640840 1483946 provision.go:172] copyRemoteCerts
	I1225 13:26:33.640921 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:33.640951 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.643970 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644390 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.644419 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644606 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.644877 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.645065 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.645204 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:33.744907 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:33.769061 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 13:26:33.792125 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:33.816115 1483946 provision.go:86] duration metric: configureAuth took 236.215977ms
	I1225 13:26:33.816159 1483946 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:33.816373 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:33.816497 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.819654 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820075 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.820108 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820287 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.820519 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820738 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820873 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.821068 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.821403 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.821428 1483946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:34.159844 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:34.159882 1483946 machine.go:91] provisioned docker machine in 880.882549ms
	I1225 13:26:34.159897 1483946 start.go:300] post-start starting for "embed-certs-880612" (driver="kvm2")
	I1225 13:26:34.159934 1483946 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:34.159964 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.160327 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:34.160358 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.163009 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163367 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.163400 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163600 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.163801 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.163943 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.164093 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.261072 1483946 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:34.265655 1483946 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:34.265686 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:34.265777 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:34.265881 1483946 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:34.265996 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:34.276013 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:34.299731 1483946 start.go:303] post-start completed in 139.812994ms
	I1225 13:26:34.299783 1483946 fix.go:56] fixHost completed within 20.612345515s
	I1225 13:26:34.299813 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.302711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303189 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.303229 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303363 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.303617 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.303856 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.304000 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.304198 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:34.304522 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:34.304535 1483946 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:34.439399 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510794.384723199
	
	I1225 13:26:34.439426 1483946 fix.go:206] guest clock: 1703510794.384723199
	I1225 13:26:34.439433 1483946 fix.go:219] Guest: 2023-12-25 13:26:34.384723199 +0000 UTC Remote: 2023-12-25 13:26:34.29978875 +0000 UTC m=+107.780041384 (delta=84.934449ms)
	I1225 13:26:34.439468 1483946 fix.go:190] guest clock delta is within tolerance: 84.934449ms
	I1225 13:26:34.439475 1483946 start.go:83] releasing machines lock for "embed-certs-880612", held for 20.75208465s
	I1225 13:26:34.439518 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.439832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:34.442677 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443002 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.443031 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.443827 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444029 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444168 1483946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:34.444225 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.444259 1483946 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:34.444295 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.447106 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447136 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447497 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447533 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447677 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447719 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447860 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447904 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447982 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448094 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448170 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.448219 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.572590 1483946 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:34.578648 1483946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:34.723874 1483946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:34.731423 1483946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:34.731495 1483946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:34.752447 1483946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:34.752478 1483946 start.go:475] detecting cgroup driver to use...
	I1225 13:26:34.752539 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:34.766782 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:34.781457 1483946 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:34.781548 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:34.798097 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:34.813743 1483946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:34.936843 1483946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:35.053397 1483946 docker.go:219] disabling docker service ...
	I1225 13:26:35.053478 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:35.067702 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:35.079670 1483946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:35.213241 1483946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:35.346105 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:35.359207 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:35.377259 1483946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:35.377347 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.388026 1483946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:35.388116 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.398180 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.411736 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.425888 1483946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:35.436586 1483946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:35.446969 1483946 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:35.447028 1483946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:35.461401 1483946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:35.471896 1483946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:35.619404 1483946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:35.825331 1483946 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:35.825410 1483946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:35.830699 1483946 start.go:543] Will wait 60s for crictl version
	I1225 13:26:35.830779 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:26:35.834938 1483946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:35.874595 1483946 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:35.874717 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.924227 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.982707 1483946 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:35.984401 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:35.987241 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987669 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:35.987708 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987991 1483946 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:35.992383 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:36.004918 1483946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:36.005000 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:36.053783 1483946 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:36.053887 1483946 ssh_runner.go:195] Run: which lz4
	I1225 13:26:36.058040 1483946 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:36.062730 1483946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:36.062785 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:35.824151 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting to get IP...
	I1225 13:26:35.825061 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825643 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:35.825605 1484550 retry.go:31] will retry after 292.143168ms: waiting for machine to come up
	I1225 13:26:36.119220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119787 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.119666 1484550 retry.go:31] will retry after 250.340048ms: waiting for machine to come up
	I1225 13:26:36.372343 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372894 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372932 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.372840 1484550 retry.go:31] will retry after 434.335692ms: waiting for machine to come up
	I1225 13:26:36.808477 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809037 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809070 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.808999 1484550 retry.go:31] will retry after 455.184367ms: waiting for machine to come up
	I1225 13:26:37.265791 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266330 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266364 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.266278 1484550 retry.go:31] will retry after 487.994897ms: waiting for machine to come up
	I1225 13:26:37.756220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756745 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756774 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.756699 1484550 retry.go:31] will retry after 817.108831ms: waiting for machine to come up
	I1225 13:26:38.575846 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576271 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576301 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:38.576222 1484550 retry.go:31] will retry after 1.022104679s: waiting for machine to come up
	I1225 13:26:39.600386 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600901 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:39.600796 1484550 retry.go:31] will retry after 1.318332419s: waiting for machine to come up
	I1225 13:26:35.190721 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.190828 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.203971 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:35.689934 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.690032 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.701978 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.190256 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.190355 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.204476 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.689969 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.690062 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.706632 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.189808 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.189921 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.203895 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.690391 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.690499 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.704914 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.190575 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.190694 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.208546 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.690090 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.690260 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.701827 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.190421 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.190549 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.202377 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.689978 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.690104 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.706511 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.963805 1483946 crio.go:444] Took 1.905809 seconds to copy over tarball
	I1225 13:26:37.963892 1483946 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:40.988182 1483946 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024256156s)
	I1225 13:26:40.988214 1483946 crio.go:451] Took 3.024377 seconds to extract the tarball
	I1225 13:26:40.988225 1483946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:26:41.030256 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:41.085117 1483946 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:26:41.085147 1483946 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:26:41.085236 1483946 ssh_runner.go:195] Run: crio config
	I1225 13:26:41.149962 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:26:41.149993 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:41.150020 1483946 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:41.150044 1483946 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880612 NodeName:embed-certs-880612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:41.150237 1483946 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880612"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:41.150312 1483946 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-880612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:41.150367 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:26:41.160557 1483946 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:41.160681 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:41.170564 1483946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1225 13:26:41.187315 1483946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:26:41.204638 1483946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1225 13:26:41.222789 1483946 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:41.226604 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:41.238315 1483946 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612 for IP: 192.168.50.179
	I1225 13:26:41.238363 1483946 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:41.238614 1483946 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:41.238665 1483946 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:41.238768 1483946 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/client.key
	I1225 13:26:41.238860 1483946 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key.518daada
	I1225 13:26:41.238925 1483946 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key
	I1225 13:26:41.239060 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:41.239098 1483946 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:41.239122 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:41.239167 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:41.239204 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:41.239245 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:41.239300 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:41.240235 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:41.265422 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:41.290398 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:41.315296 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:41.339984 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:41.363071 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:41.392035 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:41.419673 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:41.444242 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:41.468314 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:41.493811 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:41.518255 1483946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:41.535605 1483946 ssh_runner.go:195] Run: openssl version
	I1225 13:26:41.541254 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:41.551784 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556610 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556686 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.562299 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:41.572173 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:40.921702 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922293 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922335 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:40.922225 1484550 retry.go:31] will retry after 1.835505717s: waiting for machine to come up
	I1225 13:26:42.760187 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760688 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760714 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:42.760625 1484550 retry.go:31] will retry after 1.646709972s: waiting for machine to come up
	I1225 13:26:44.409540 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410023 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410064 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:44.409998 1484550 retry.go:31] will retry after 1.922870398s: waiting for machine to come up
	I1225 13:26:40.190712 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.190831 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.205624 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:40.690729 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.690835 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.702671 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.190145 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.190234 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.201991 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.690585 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.690683 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.704041 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.190633 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.190745 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.202086 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.690049 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.690177 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.701556 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.701597 1483118 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:42.701611 1483118 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:42.701635 1483118 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:42.701719 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:42.745733 1483118 cri.go:89] found id: ""
	I1225 13:26:42.745835 1483118 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:42.761355 1483118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:42.773734 1483118 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:42.773812 1483118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785213 1483118 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:42.927378 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.715163 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.934803 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.024379 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.106069 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:44.106200 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:44.607243 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:41.582062 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692062 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692156 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.698498 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:41.709171 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:41.719597 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724562 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724628 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.730571 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:41.740854 1483946 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:41.745792 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:41.752228 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:41.758318 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:41.764486 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:41.770859 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:41.777155 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:41.783382 1483946 kubeadm.go:404] StartCluster: {Name:embed-certs-880612 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:41.783493 1483946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:41.783557 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:41.827659 1483946 cri.go:89] found id: ""
	I1225 13:26:41.827738 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:41.837713 1483946 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:41.837740 1483946 kubeadm.go:636] restartCluster start
	I1225 13:26:41.837788 1483946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:41.846668 1483946 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.847773 1483946 kubeconfig.go:92] found "embed-certs-880612" server: "https://192.168.50.179:8443"
	I1225 13:26:41.850105 1483946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:41.859124 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.859196 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.870194 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.359810 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.359906 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.371508 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.860078 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.860167 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.876302 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.359657 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.359761 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.376765 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.859950 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.860067 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.878778 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.359355 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.359439 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.371780 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.859294 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.859429 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.872286 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.359315 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.359438 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.375926 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.859453 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.859560 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.875608 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.360180 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.360335 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.376143 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.335832 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336405 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336439 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:46.336342 1484550 retry.go:31] will retry after 2.75487061s: waiting for machine to come up
	I1225 13:26:49.092529 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092962 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092986 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:49.092926 1484550 retry.go:31] will retry after 4.456958281s: waiting for machine to come up
	I1225 13:26:45.106806 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:45.607205 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.106726 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.606675 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.628821 1483118 api_server.go:72] duration metric: took 2.522750929s to wait for apiserver process to appear ...
	I1225 13:26:46.628852 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:46.628878 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.629487 1483118 api_server.go:269] stopped: https://192.168.72.232:8443/healthz: Get "https://192.168.72.232:8443/healthz": dial tcp 192.168.72.232:8443: connect: connection refused
	I1225 13:26:47.129325 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.860130 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.860255 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.875574 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.360120 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.360254 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.375470 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.860119 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.860205 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.875015 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.359513 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.359649 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.374270 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.859330 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.859424 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.871789 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.359307 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.359403 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.371339 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.859669 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.859766 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.872882 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.359345 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.359455 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.370602 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.859148 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.859271 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.871042 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.359459 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.359544 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.371252 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.824734 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:26:50.824772 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:26:50.824789 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:50.996870 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:50.996923 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.129079 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.134132 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.134169 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.629263 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.635273 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.635305 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:52.129955 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:52.135538 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:26:52.144432 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:26:52.144470 1483118 api_server.go:131] duration metric: took 5.515610636s to wait for apiserver health ...
	I1225 13:26:52.144483 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:52.144491 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:52.146289 1483118 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:26:52.147684 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:26:52.187156 1483118 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:26:52.210022 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:26:52.225137 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:26:52.225190 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:26:52.225200 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:26:52.225218 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:26:52.225230 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:26:52.225239 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:26:52.225248 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:26:52.225262 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:26:52.225272 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:26:52.225288 1483118 system_pods.go:74] duration metric: took 15.241676ms to wait for pod list to return data ...
	I1225 13:26:52.225300 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:26:52.229429 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:26:52.229471 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:26:52.229527 1483118 node_conditions.go:105] duration metric: took 4.217644ms to run NodePressure ...
	I1225 13:26:52.229549 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.630596 1483118 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635810 1483118 kubeadm.go:787] kubelet initialised
	I1225 13:26:52.635835 1483118 kubeadm.go:788] duration metric: took 5.192822ms waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635844 1483118 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:52.645095 1483118 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.652146 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652181 1483118 pod_ready.go:81] duration metric: took 7.040805ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.652194 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652203 1483118 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.658310 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658347 1483118 pod_ready.go:81] duration metric: took 6.126503ms waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.658359 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658369 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.663826 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663871 1483118 pod_ready.go:81] duration metric: took 5.492644ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.663884 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663893 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.669098 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669137 1483118 pod_ready.go:81] duration metric: took 5.230934ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.669148 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669157 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.035736 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035782 1483118 pod_ready.go:81] duration metric: took 366.614624ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.035796 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035806 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.435089 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435123 1483118 pod_ready.go:81] duration metric: took 399.30822ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.435135 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435145 1483118 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.835248 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835280 1483118 pod_ready.go:81] duration metric: took 400.124904ms waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.835290 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835299 1483118 pod_ready.go:38] duration metric: took 1.199443126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:53.835317 1483118 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:26:53.848912 1483118 ops.go:34] apiserver oom_adj: -16
	I1225 13:26:53.848954 1483118 kubeadm.go:640] restartCluster took 21.184297233s
	I1225 13:26:53.848965 1483118 kubeadm.go:406] StartCluster complete in 21.235197323s
	I1225 13:26:53.849001 1483118 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.849140 1483118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:26:53.851909 1483118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.852278 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:26:53.852353 1483118 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:26:53.852461 1483118 addons.go:69] Setting storage-provisioner=true in profile "no-preload-330063"
	I1225 13:26:53.852495 1483118 addons.go:237] Setting addon storage-provisioner=true in "no-preload-330063"
	W1225 13:26:53.852507 1483118 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:26:53.852514 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:53.852555 1483118 addons.go:69] Setting default-storageclass=true in profile "no-preload-330063"
	I1225 13:26:53.852579 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852607 1483118 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-330063"
	I1225 13:26:53.852871 1483118 addons.go:69] Setting metrics-server=true in profile "no-preload-330063"
	I1225 13:26:53.852895 1483118 addons.go:237] Setting addon metrics-server=true in "no-preload-330063"
	W1225 13:26:53.852904 1483118 addons.go:246] addon metrics-server should already be in state true
	I1225 13:26:53.852948 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853315 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853361 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.858023 1483118 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-330063" context rescaled to 1 replicas
	I1225 13:26:53.858077 1483118 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:26:53.861368 1483118 out.go:177] * Verifying Kubernetes components...
	I1225 13:26:53.862819 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:26:53.870209 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1225 13:26:53.870486 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I1225 13:26:53.870693 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.870807 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.871066 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I1225 13:26:53.871329 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871341 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871426 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871433 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871742 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.871770 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.872271 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872308 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.872511 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.872896 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872923 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.873167 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.873180 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.873549 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.873693 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.878043 1483118 addons.go:237] Setting addon default-storageclass=true in "no-preload-330063"
	W1225 13:26:53.878077 1483118 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:26:53.878117 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.878613 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.878657 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.891971 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I1225 13:26:53.892418 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.893067 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.893092 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.893461 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.893634 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.895563 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.897916 1483118 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:26:53.896007 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I1225 13:26:53.899799 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:26:53.899823 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:26:53.899858 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.900294 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.900987 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.901006 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.901451 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I1225 13:26:53.902344 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.902956 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.902981 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.903419 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.903917 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.903986 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.904022 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.904445 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.904452 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.904471 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.904615 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.904785 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.906582 1483118 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:53.551972 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552449 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Found IP for machine: 192.168.61.39
	I1225 13:26:53.552500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has current primary IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552515 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserving static IP address...
	I1225 13:26:53.552918 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.552967 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | skip adding static IP to network mk-default-k8s-diff-port-344803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"}
	I1225 13:26:53.552990 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserved static IP address: 192.168.61.39
	I1225 13:26:53.553003 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for SSH to be available...
	I1225 13:26:53.553041 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Getting to WaitForSSH function...
	I1225 13:26:53.555272 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555619 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.555654 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555753 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH client type: external
	I1225 13:26:53.555785 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa (-rw-------)
	I1225 13:26:53.555828 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:53.555852 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | About to run SSH command:
	I1225 13:26:53.555872 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | exit 0
	I1225 13:26:53.642574 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:53.643094 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetConfigRaw
	I1225 13:26:53.643946 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.646842 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647308 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.647351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647580 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:26:53.647806 1484104 machine.go:88] provisioning docker machine ...
	I1225 13:26:53.647827 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:53.648054 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648255 1484104 buildroot.go:166] provisioning hostname "default-k8s-diff-port-344803"
	I1225 13:26:53.648274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648485 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.650935 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651291 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.651327 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651479 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.651718 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.651887 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.652028 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.652213 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.652587 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.652605 1484104 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-344803 && echo "default-k8s-diff-port-344803" | sudo tee /etc/hostname
	I1225 13:26:53.782284 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-344803
	
	I1225 13:26:53.782315 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.785326 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785631 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.785668 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785876 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.786149 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786374 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786600 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.786806 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.787202 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.787222 1484104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-344803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-344803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-344803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:53.916809 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:53.916844 1484104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:53.916870 1484104 buildroot.go:174] setting up certificates
	I1225 13:26:53.916882 1484104 provision.go:83] configureAuth start
	I1225 13:26:53.916900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.917233 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.920048 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920377 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.920402 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920538 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.923177 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923404 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.923437 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923584 1484104 provision.go:138] copyHostCerts
	I1225 13:26:53.923666 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:53.923686 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:53.923763 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:53.923934 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:53.923947 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:53.923978 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:53.924078 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:53.924088 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:53.924115 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:53.924207 1484104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-344803 san=[192.168.61.39 192.168.61.39 localhost 127.0.0.1 minikube default-k8s-diff-port-344803]
	I1225 13:26:54.014673 1484104 provision.go:172] copyRemoteCerts
	I1225 13:26:54.014739 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:54.014772 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.018361 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.018849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.018924 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.019089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.019351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.019559 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.019949 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.120711 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:54.155907 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1225 13:26:54.192829 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:26:54.227819 1484104 provision.go:86] duration metric: configureAuth took 310.912829ms
	I1225 13:26:54.227853 1484104 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:54.228119 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:54.228236 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.232535 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232580 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.232628 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232945 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.233215 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233422 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233608 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.233801 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.234295 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.234322 1484104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:54.638656 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:54.638772 1484104 machine.go:91] provisioned docker machine in 990.950916ms
	I1225 13:26:54.638798 1484104 start.go:300] post-start starting for "default-k8s-diff-port-344803" (driver="kvm2")
	I1225 13:26:54.638821 1484104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:54.638883 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.639341 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:54.639383 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.643369 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.643810 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.643863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.644140 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.644444 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.644624 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.644774 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.740189 1484104 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:54.745972 1484104 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:54.746009 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:54.746104 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:54.746229 1484104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:54.746362 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:54.758199 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:54.794013 1484104 start.go:303] post-start completed in 155.186268ms
	I1225 13:26:54.794048 1484104 fix.go:56] fixHost completed within 20.354368879s
	I1225 13:26:54.794077 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.797620 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798092 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.798129 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798423 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.798692 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.798900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.799067 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.799293 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.799807 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.799829 1484104 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
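	The provisioning steps above all follow one pattern: open an SSH session to the VM with the machine's id_rsa key and run a single command (set the hostname, patch /etc/hosts, write /etc/sysconfig/crio.minikube, read the guest clock). Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the address, user and key path are copied from the log, while the function name and error handling are illustrative only — the log shows minikube itself also shelling out to /usr/bin/ssh for this step.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote runs one command on a remote host with key-based auth and
	// returns its combined output. Illustrative sketch, not minikube code.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.61.39:22", "docker",
			"/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa",
			"date +%s.%N")
		fmt.Println(out, err)
	}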
	I1225 13:26:54.933026 1482618 start.go:369] acquired machines lock for "old-k8s-version-198979" in 59.553202424s
	I1225 13:26:54.933097 1482618 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:54.933105 1482618 fix.go:54] fixHost starting: 
	I1225 13:26:54.933577 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:54.933620 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:54.956206 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I1225 13:26:54.956801 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:54.958396 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:26:54.958425 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:54.958887 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:54.959164 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:26:54.959384 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:26:54.961270 1482618 fix.go:102] recreateIfNeeded on old-k8s-version-198979: state=Stopped err=<nil>
	I1225 13:26:54.961305 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	W1225 13:26:54.961494 1482618 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:54.963775 1482618 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-198979" ...
	I1225 13:26:53.904908 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.908114 1483118 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:53.908130 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:26:53.908147 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.908370 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.912254 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.912861 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.912885 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.913096 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.913324 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.913510 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.913629 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.959638 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I1225 13:26:53.960190 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.960890 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.960913 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.961320 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.961603 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.963927 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.964240 1483118 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:53.964262 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:26:53.964282 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.967614 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968092 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.968155 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968471 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.968679 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.968879 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.969040 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:54.064639 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:26:54.064674 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:26:54.093609 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:54.147415 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:26:54.147449 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:26:54.148976 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:54.160381 1483118 node_ready.go:35] waiting up to 6m0s for node "no-preload-330063" to be "Ready" ...
	I1225 13:26:54.160490 1483118 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:26:54.202209 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.202242 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:26:54.276251 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.965270 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Start
	I1225 13:26:54.965680 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring networks are active...
	I1225 13:26:54.966477 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network default is active
	I1225 13:26:54.966919 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network mk-old-k8s-version-198979 is active
	I1225 13:26:54.967420 1482618 main.go:141] libmachine: (old-k8s-version-198979) Getting domain xml...
	I1225 13:26:54.968585 1482618 main.go:141] libmachine: (old-k8s-version-198979) Creating domain...
	I1225 13:26:55.590940 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.497275379s)
	I1225 13:26:55.591005 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591020 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591108 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442107411s)
	I1225 13:26:55.591127 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591136 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591247 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.314957717s)
	I1225 13:26:55.591268 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.595765 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.595838 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.595847 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.595859 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.595867 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596016 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596049 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596058 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596067 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596075 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596177 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596218 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596226 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596236 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596244 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596485 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596515 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596929 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596972 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596979 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596990 1483118 addons.go:473] Verifying addon metrics-server=true in "no-preload-330063"
	I1225 13:26:55.597032 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.597067 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.597076 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.610755 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.610788 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.611238 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.611264 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.613767 1483118 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1225 13:26:51.859989 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.860081 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.871647 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.871684 1483946 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:51.871709 1483946 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:51.871725 1483946 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:51.871817 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:51.919587 1483946 cri.go:89] found id: ""
	I1225 13:26:51.919706 1483946 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:51.935341 1483946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:51.944522 1483946 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:51.944588 1483946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954607 1483946 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954637 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.092831 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.921485 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.161902 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.270786 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.340226 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:53.340331 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:53.841309 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.341486 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.841104 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.341409 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.841238 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.867371 1483946 api_server.go:72] duration metric: took 2.52714535s to wait for apiserver process to appear ...
	I1225 13:26:55.867406 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:55.867434 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:55.867970 1483946 api_server.go:269] stopped: https://192.168.50.179:8443/healthz: Get "https://192.168.50.179:8443/healthz": dial tcp 192.168.50.179:8443: connect: connection refused
	I1225 13:26:56.368335 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:54.932810 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510814.876127642
	
	I1225 13:26:54.932838 1484104 fix.go:206] guest clock: 1703510814.876127642
	I1225 13:26:54.932848 1484104 fix.go:219] Guest: 2023-12-25 13:26:54.876127642 +0000 UTC Remote: 2023-12-25 13:26:54.794053361 +0000 UTC m=+104.977714576 (delta=82.074281ms)
	I1225 13:26:54.932878 1484104 fix.go:190] guest clock delta is within tolerance: 82.074281ms
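	fix.go compares the guest clock (the `date +%s.%N` output above, 1703510814.876127642) against the host clock and accepts the result when the delta is inside a tolerance. A small Go sketch of that comparison follows; the 2s tolerance is an assumed placeholder, not necessarily the value minikube uses.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output (e.g. "1703510814.876127642")
	// into a time.Time. %N always yields 9 digits, so the fraction is nanoseconds.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1703510814.876127642")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		const tolerance = 2 * time.Second // illustrative threshold only
		within := delta < tolerance && delta > -tolerance
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
	}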
	I1225 13:26:54.932885 1484104 start.go:83] releasing machines lock for "default-k8s-diff-port-344803", held for 20.493256775s
	I1225 13:26:54.932920 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.933380 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:54.936626 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.937262 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937534 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938366 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938583 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938676 1484104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:54.938722 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.938826 1484104 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:54.938854 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.942392 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.942792 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.942831 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.943292 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.943487 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.943635 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.943764 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.943922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.944870 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.945020 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.945066 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.945318 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.945498 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.945743 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:55.069674 1484104 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:55.078333 1484104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:55.247706 1484104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:55.256782 1484104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:55.256885 1484104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:55.278269 1484104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:55.278303 1484104 start.go:475] detecting cgroup driver to use...
	I1225 13:26:55.278383 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:55.302307 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:55.322161 1484104 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:55.322345 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:55.342241 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:55.361128 1484104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:55.547880 1484104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:55.693711 1484104 docker.go:219] disabling docker service ...
	I1225 13:26:55.693804 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:55.708058 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:55.721136 1484104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:55.890044 1484104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:56.042549 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:56.061359 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:56.086075 1484104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:56.086169 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.100059 1484104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:56.100162 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.113858 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.127589 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.140964 1484104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:56.155180 1484104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:56.167585 1484104 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:56.167716 1484104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:56.186467 1484104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:56.200044 1484104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:56.339507 1484104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:56.563294 1484104 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:56.563385 1484104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:56.570381 1484104 start.go:543] Will wait 60s for crictl version
	I1225 13:26:56.570477 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:26:56.575675 1484104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:56.617219 1484104 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:56.617322 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.679138 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.751125 1484104 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:56.752677 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:56.756612 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757108 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:56.757142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757502 1484104 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:56.763739 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:56.781952 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:56.782029 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:56.840852 1484104 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:56.840939 1484104 ssh_runner.go:195] Run: which lz4
	I1225 13:26:56.845412 1484104 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:56.850135 1484104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:56.850181 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:58.731034 1484104 crio.go:444] Took 1.885656 seconds to copy over tarball
	I1225 13:26:58.731138 1484104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:55.615056 1483118 addons.go:508] enable addons completed in 1.762702944s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1225 13:26:56.169115 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:58.665700 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
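	node_ready.go is polling the node object and reporting its Ready condition (`"Ready":"False"` above) until it flips to true or the 6m0s budget runs out. A client-go sketch of the same check; the kubeconfig path and 10s polling interval are placeholders, not minikube's exact values.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has a Ready=True condition.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "no-preload-330063")
			fmt.Println("ready:", ready, "err:", err)
			if ready {
				return
			}
			time.Sleep(10 * time.Second)
		}
	}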
	I1225 13:26:56.860066 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting to get IP...
	I1225 13:26:56.860987 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:56.861644 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:56.861765 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:56.861626 1484760 retry.go:31] will retry after 198.102922ms: waiting for machine to come up
	I1225 13:26:57.061281 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.062001 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.062029 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.061907 1484760 retry.go:31] will retry after 299.469436ms: waiting for machine to come up
	I1225 13:26:57.362874 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.363385 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.363441 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.363363 1484760 retry.go:31] will retry after 460.796393ms: waiting for machine to come up
	I1225 13:26:57.826330 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.827065 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.827098 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.827021 1484760 retry.go:31] will retry after 397.690798ms: waiting for machine to come up
	I1225 13:26:58.226942 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.227490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.227528 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.227437 1484760 retry.go:31] will retry after 731.798943ms: waiting for machine to come up
	I1225 13:26:58.960490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.960969 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.961000 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.960930 1484760 retry.go:31] will retry after 577.614149ms: waiting for machine to come up
	I1225 13:26:59.540871 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:59.541581 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:59.541607 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:59.541494 1484760 retry.go:31] will retry after 1.177902051s: waiting for machine to come up
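	The `will retry after …: waiting for machine to come up` lines come from a retry helper that polls for the VM's DHCP lease with a growing, jittered delay. A generic Go sketch of that wait loop; the initial delay, cap and jitter below are illustrative, not the exact schedule retry.go uses.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() with a growing, jittered delay until it succeeds
	// or the deadline passes, logging each retry like the lines above.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("retry %d: will retry after %v: %v\n", attempt, jittered, err)
			time.Sleep(jittered)
			if delay < 2*time.Second {
				delay *= 2 // back off until the cap
			}
		}
	}

	func main() {
		start := time.Now()
		err := waitFor(func() error {
			if time.Since(start) < time.Second {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 10*time.Second)
		fmt.Println("done:", err)
	}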
	I1225 13:27:00.799310 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.799355 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.799376 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.905272 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.905311 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.905330 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.922285 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.922324 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:01.367590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.374093 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.374155 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
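(Editor's note: the repeated 403/500 responses above come from minikube polling the apiserver's /healthz endpoint until its post-start hooks, such as rbac/bootstrap-roles, complete. The following is a minimal, illustrative sketch of such a probe loop, not minikube's actual api_server.go code; the address and port are copied from the log above, and the retry interval is an assumption.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification: the probe runs before the cluster CA is trusted locally.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.50.179:8443/healthz") // endpoint taken from the log above
            if err != nil {
                time.Sleep(500 * time.Millisecond) // apiserver not reachable yet; retry
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return // control plane reports healthy, as in the "returned 200: ok" line later in this log
            }
            time.Sleep(500 * time.Millisecond)
        }
    }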
	I1225 13:27:02.440592 1484104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.709419632s)
	I1225 13:27:02.440624 1484104 crio.go:451] Took 3.709555 seconds to extract the tarball
	I1225 13:27:02.440636 1484104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:02.504136 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:02.613720 1484104 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:27:02.613752 1484104 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:27:02.613839 1484104 ssh_runner.go:195] Run: crio config
	I1225 13:27:02.685414 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:02.685436 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:02.685459 1484104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:02.685477 1484104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-344803 NodeName:default-k8s-diff-port-344803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:27:02.685627 1484104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-344803"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:02.685710 1484104 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-344803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1225 13:27:02.685778 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:27:02.696327 1484104 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:02.696420 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:02.707369 1484104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1225 13:27:02.728181 1484104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:02.748934 1484104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1225 13:27:02.770783 1484104 ssh_runner.go:195] Run: grep 192.168.61.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:02.775946 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:02.790540 1484104 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803 for IP: 192.168.61.39
	I1225 13:27:02.790590 1484104 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:02.790792 1484104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:02.790862 1484104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:02.790961 1484104 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.key
	I1225 13:27:02.859647 1484104 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key.daee23f3
	I1225 13:27:02.859773 1484104 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key
	I1225 13:27:02.859934 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:02.859993 1484104 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:02.860010 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:02.860037 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:02.860061 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:02.860082 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:02.860121 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:02.860871 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:02.889354 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 13:27:02.916983 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:02.943348 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:27:02.969940 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:02.996224 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:03.021662 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:03.052589 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:03.080437 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:03.107973 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:03.134921 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:03.161948 1484104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:03.184606 1484104 ssh_runner.go:195] Run: openssl version
	I1225 13:27:03.192305 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:03.204868 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209793 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209895 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.216568 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:03.229131 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:03.241634 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247328 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247397 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.253730 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:03.267063 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:03.281957 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288393 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288481 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.295335 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:03.307900 1484104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:03.313207 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:03.319949 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:03.327223 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:03.333927 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:03.341434 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:03.349298 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
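(Editor's note: the openssl x509 -checkend 86400 runs above confirm each control-plane certificate remains valid for at least 24 hours before the cluster is restarted. Below is a minimal sketch of an equivalent check in Go; the file path is copied from the log above and the 24-hour threshold mirrors the -checkend argument — this is illustrative, not minikube's implementation.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt") // path from the log above
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; it would need to be regenerated")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }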
	I1225 13:27:03.356303 1484104 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:03.356409 1484104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:03.356463 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:03.407914 1484104 cri.go:89] found id: ""
	I1225 13:27:03.407991 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:03.418903 1484104 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:03.418928 1484104 kubeadm.go:636] restartCluster start
	I1225 13:27:03.418981 1484104 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:03.429758 1484104 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.431242 1484104 kubeconfig.go:92] found "default-k8s-diff-port-344803" server: "https://192.168.61.39:8444"
	I1225 13:27:03.433847 1484104 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:03.443564 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.443648 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.457188 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.943692 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.943806 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.956490 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:04.443680 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.443781 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.464817 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:00.671397 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:27:01.665347 1483118 node_ready.go:49] node "no-preload-330063" has status "Ready":"True"
	I1225 13:27:01.665383 1483118 node_ready.go:38] duration metric: took 7.504959726s waiting for node "no-preload-330063" to be "Ready" ...
	I1225 13:27:01.665398 1483118 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:01.675515 1483118 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688377 1483118 pod_ready.go:92] pod "coredns-76f75df574-pwk9h" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:01.688467 1483118 pod_ready.go:81] duration metric: took 12.819049ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688492 1483118 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:03.697007 1483118 pod_ready.go:102] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:04.379595 1483118 pod_ready.go:92] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.379628 1483118 pod_ready.go:81] duration metric: took 2.691119222s waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.379643 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393427 1483118 pod_ready.go:92] pod "kube-apiserver-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.393459 1483118 pod_ready.go:81] duration metric: took 13.806505ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393473 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454291 1483118 pod_ready.go:92] pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.454387 1483118 pod_ready.go:81] duration metric: took 60.903507ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454417 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525436 1483118 pod_ready.go:92] pod "kube-proxy-jbch6" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.525471 1483118 pod_ready.go:81] duration metric: took 71.040817ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525486 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546670 1483118 pod_ready.go:92] pod "kube-scheduler-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.546709 1483118 pod_ready.go:81] duration metric: took 21.213348ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546726 1483118 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.868308 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.913335 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.913393 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.367660 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.375382 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.375424 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.867590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.873638 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.873680 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.368014 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.377785 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.377827 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.867933 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.873979 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.874013 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.367576 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.377835 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.377884 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.868444 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.879138 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.879187 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:05.367519 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:05.377570 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:27:05.388572 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:05.388605 1483946 api_server.go:131] duration metric: took 9.521192442s to wait for apiserver health ...
	I1225 13:27:05.388615 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:27:05.388625 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:05.390592 1483946 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:00.720918 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:00.721430 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:00.721457 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:00.721380 1484760 retry.go:31] will retry after 931.125211ms: waiting for machine to come up
	I1225 13:27:01.654661 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:01.655341 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:01.655367 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:01.655263 1484760 retry.go:31] will retry after 1.333090932s: waiting for machine to come up
	I1225 13:27:02.991018 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:02.991520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:02.991555 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:02.991468 1484760 retry.go:31] will retry after 2.006684909s: waiting for machine to come up
	I1225 13:27:05.000424 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:05.000972 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:05.001023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:05.000908 1484760 retry.go:31] will retry after 2.72499386s: waiting for machine to come up
	I1225 13:27:05.391952 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:05.406622 1483946 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:05.429599 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:05.441614 1483946 system_pods.go:59] 9 kube-system pods found
	I1225 13:27:05.441681 1483946 system_pods.go:61] "coredns-5dd5756b68-4jqz4" [026524a6-1f73-4644-8a80-b276326178b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441698 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441710 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:05.441721 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:05.441732 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:05.441746 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:05.441758 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:05.441773 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:05.441790 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:27:05.441812 1483946 system_pods.go:74] duration metric: took 12.174684ms to wait for pod list to return data ...
	I1225 13:27:05.441824 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:05.447018 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:05.447064 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:05.447079 1483946 node_conditions.go:105] duration metric: took 5.247366ms to run NodePressure ...
	I1225 13:27:05.447106 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:05.767972 1483946 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774281 1483946 kubeadm.go:787] kubelet initialised
	I1225 13:27:05.774307 1483946 kubeadm.go:788] duration metric: took 6.300121ms waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774316 1483946 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:05.781474 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.789698 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789732 1483946 pod_ready.go:81] duration metric: took 8.22748ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.789746 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789758 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.798517 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798584 1483946 pod_ready.go:81] duration metric: took 8.811967ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.798601 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798612 1483946 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.804958 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.804998 1483946 pod_ready.go:81] duration metric: took 6.356394ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.805018 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.805028 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.834502 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834549 1483946 pod_ready.go:81] duration metric: took 29.510044ms waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.834561 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834571 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.234676 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234728 1483946 pod_ready.go:81] duration metric: took 400.145957ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.234742 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234752 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.634745 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634785 1483946 pod_ready.go:81] duration metric: took 400.019189ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.634798 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634807 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.034762 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034793 1483946 pod_ready.go:81] duration metric: took 399.977148ms waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.034803 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034810 1483946 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.433932 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433969 1483946 pod_ready.go:81] duration metric: took 399.14889ms waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.433982 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433992 1483946 pod_ready.go:38] duration metric: took 1.659666883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:07.434016 1483946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:07.448377 1483946 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:07.448405 1483946 kubeadm.go:640] restartCluster took 25.610658268s
	I1225 13:27:07.448415 1483946 kubeadm.go:406] StartCluster complete in 25.665045171s
	I1225 13:27:07.448443 1483946 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.448530 1483946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:07.451369 1483946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.453102 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:07.453244 1483946 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:07.453332 1483946 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880612"
	I1225 13:27:07.453351 1483946 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-880612"
	W1225 13:27:07.453363 1483946 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:07.453432 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453450 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:27:07.453516 1483946 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880612"
	I1225 13:27:07.453536 1483946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880612"
	I1225 13:27:07.453860 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453870 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453902 1483946 addons.go:69] Setting metrics-server=true in profile "embed-certs-880612"
	I1225 13:27:07.453917 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.453925 1483946 addons.go:237] Setting addon metrics-server=true in "embed-certs-880612"
	W1225 13:27:07.454160 1483946 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:07.454211 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453903 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.454601 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.454669 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.476508 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1225 13:27:07.476720 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1225 13:27:07.477202 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477210 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477794 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477815 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.477957 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477971 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.478407 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.478478 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.479041 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.479083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.480350 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.483762 1483946 addons.go:237] Setting addon default-storageclass=true in "embed-certs-880612"
	W1225 13:27:07.483783 1483946 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:07.483816 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.484249 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.484285 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.489369 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1225 13:27:07.489817 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.490332 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.490354 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.491339 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.494037 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.494083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.501003 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1225 13:27:07.501737 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.502399 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.502422 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.502882 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.503092 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.505387 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.507725 1483946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:07.509099 1483946 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.509121 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:07.509153 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.513153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.513923 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.513957 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.514226 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.514426 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.514610 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.515190 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.516933 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I1225 13:27:07.517681 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.518194 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.518220 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.518784 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1225 13:27:07.519309 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.519400 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.519930 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.519956 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.520525 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.520573 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.520819 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.521050 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.523074 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.525265 1483946 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:07.526542 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:07.526569 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:07.526598 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.530316 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.530846 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.530883 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.531223 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.531571 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.531832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.532070 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.544917 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1225 13:27:07.545482 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.546037 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.546059 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.546492 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.546850 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.548902 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.549177 1483946 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.549196 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:07.549218 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.553036 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553541 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.553572 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553784 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.554642 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.554893 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.555581 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.676244 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.704310 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.718012 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:07.718043 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:07.779041 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:07.779073 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:07.786154 1483946 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:07.812338 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.812373 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:07.837795 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.974099 1483946 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-880612" context rescaled to 1 replicas
	I1225 13:27:07.974158 1483946 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:07.977116 1483946 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:07.978618 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:09.163988 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.459630406s)
	I1225 13:27:09.164059 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164073 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164091 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.487803106s)
	I1225 13:27:09.164129 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164149 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164617 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164624 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164629 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.164639 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164641 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164651 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164653 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164661 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164666 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164622 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165025 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165095 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165121 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.165172 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165186 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.188483 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.188510 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.188847 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.188898 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.188906 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.193684 1483946 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.215023208s)
	I1225 13:27:09.193736 1483946 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:09.193789 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.355953438s)
	I1225 13:27:09.193825 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.193842 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.194176 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.194192 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.194208 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.194219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.195998 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.196000 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.196033 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.196044 1483946 addons.go:473] Verifying addon metrics-server=true in "embed-certs-880612"
	I1225 13:27:09.198211 1483946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:04.943819 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.943958 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.960056 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.443699 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.443795 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.461083 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.943713 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.943821 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.960712 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.444221 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.444305 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.458894 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.944546 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.944630 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.958754 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.444332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.444462 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.491468 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.943982 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.944135 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.960697 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.444285 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.444408 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.461209 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.943720 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.943866 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.959990 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:09.444604 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.444727 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.463020 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.556605 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:08.560748 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:07.728505 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:07.728994 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:07.729023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:07.728936 1484760 retry.go:31] will retry after 2.39810797s: waiting for machine to come up
	I1225 13:27:10.129402 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:10.129925 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:10.129960 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:10.129860 1484760 retry.go:31] will retry after 4.278491095s: waiting for machine to come up
	I1225 13:27:09.199531 1483946 addons.go:508] enable addons completed in 1.746293071s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:11.199503 1483946 node_ready.go:49] node "embed-certs-880612" has status "Ready":"True"
	I1225 13:27:11.199529 1483946 node_ready.go:38] duration metric: took 2.005779632s waiting for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:11.199541 1483946 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:11.207447 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:09.943841 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.943948 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.960478 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.444037 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.444309 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.463480 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.943760 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.943886 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.960191 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.444602 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.444702 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.458181 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.943674 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.943783 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.956418 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.443719 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.443835 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.456707 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.944332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.944434 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.957217 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.443965 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:13.444076 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:13.455968 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.456008 1484104 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:13.456051 1484104 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:13.456067 1484104 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:13.456145 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:13.497063 1484104 cri.go:89] found id: ""
	I1225 13:27:13.497135 1484104 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:13.513279 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:13.522816 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:13.522885 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532580 1484104 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532612 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:13.668876 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:14.848056 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179140695s)
	I1225 13:27:14.848090 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:11.072420 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:13.555685 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:14.413456 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:14.414013 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:14.414043 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:14.413960 1484760 retry.go:31] will retry after 4.470102249s: waiting for machine to come up
	I1225 13:27:11.714710 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.714747 1483946 pod_ready.go:81] duration metric: took 507.263948ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.714760 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720448 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.720472 1483946 pod_ready.go:81] duration metric: took 5.705367ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720481 1483946 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725691 1483946 pod_ready.go:92] pod "etcd-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.725717 1483946 pod_ready.go:81] duration metric: took 5.229718ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725725 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238949 1483946 pod_ready.go:92] pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.238979 1483946 pod_ready.go:81] duration metric: took 1.513246575s waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238992 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244957 1483946 pod_ready.go:92] pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.244980 1483946 pod_ready.go:81] duration metric: took 5.981457ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244991 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609255 1483946 pod_ready.go:92] pod "kube-proxy-677d7" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.609282 1483946 pod_ready.go:81] duration metric: took 364.285426ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609292 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621505 1483946 pod_ready.go:92] pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:15.621540 1483946 pod_ready.go:81] duration metric: took 2.012239726s waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621553 1483946 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.047153 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.142405 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.237295 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:15.237406 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:15.737788 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.238003 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.738328 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.238494 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.738177 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.237676 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.259279 1484104 api_server.go:72] duration metric: took 3.021983877s to wait for apiserver process to appear ...
	I1225 13:27:18.259305 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:18.259331 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:15.555810 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.056361 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.888547 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889138 1482618 main.go:141] libmachine: (old-k8s-version-198979) Found IP for machine: 192.168.39.186
	I1225 13:27:18.889167 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserving static IP address...
	I1225 13:27:18.889183 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has current primary IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889631 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.889672 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserved static IP address: 192.168.39.186
	I1225 13:27:18.889702 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | skip adding static IP to network mk-old-k8s-version-198979 - found existing host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"}
	I1225 13:27:18.889724 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Getting to WaitForSSH function...
	I1225 13:27:18.889741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting for SSH to be available...
	I1225 13:27:18.892133 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892475 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.892509 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH client type: external
	I1225 13:27:18.892658 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa (-rw-------)
	I1225 13:27:18.892688 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:27:18.892703 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | About to run SSH command:
	I1225 13:27:18.892722 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | exit 0
	I1225 13:27:18.991797 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | SSH cmd err, output: <nil>: 
	I1225 13:27:18.992203 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetConfigRaw
	I1225 13:27:18.992943 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:18.996016 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996344 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.996416 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996762 1482618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:27:18.996990 1482618 machine.go:88] provisioning docker machine ...
	I1225 13:27:18.997007 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:18.997254 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997454 1482618 buildroot.go:166] provisioning hostname "old-k8s-version-198979"
	I1225 13:27:18.997483 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997670 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.000725 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001114 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.001144 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001332 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.001504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001686 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001836 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.002039 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.002592 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.002614 1482618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-198979 && echo "old-k8s-version-198979" | sudo tee /etc/hostname
	I1225 13:27:19.148260 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-198979
	
	I1225 13:27:19.148291 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.151692 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152160 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.152196 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152350 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.152566 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152743 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152941 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.153133 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.153647 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.153678 1482618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-198979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-198979/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-198979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:27:19.294565 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:27:19.294606 1482618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:27:19.294635 1482618 buildroot.go:174] setting up certificates
	I1225 13:27:19.294649 1482618 provision.go:83] configureAuth start
	I1225 13:27:19.294663 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:19.295039 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:19.298511 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.298933 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.298971 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.299137 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.302045 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302486 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.302520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302682 1482618 provision.go:138] copyHostCerts
	I1225 13:27:19.302777 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:27:19.302806 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:27:19.302869 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:27:19.302994 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:27:19.303012 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:27:19.303042 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:27:19.303103 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:27:19.303113 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:27:19.303131 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:27:19.303177 1482618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-198979 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube old-k8s-version-198979]
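copyHostCerts and the server-cert step above rebuild the docker-machine style TLS material: the host CA, client cert and key are copied into the profile, and a fresh server certificate is issued with SANs covering the VM IP, loopback, and the machine name. A rough sketch of producing such a SAN certificate with crypto/x509; note it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key for the server certificate (2048-bit RSA, as docker-machine certs use).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-198979"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the san=[...] list in the log: VM IP, loopback, hostnames.
            IPAddresses: []net.IP{net.ParseIP("192.168.39.186"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-198979"},
        }

        // Self-signed here for brevity; the real provisioner signs with the shared CA key.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }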
	I1225 13:27:19.444049 1482618 provision.go:172] copyRemoteCerts
	I1225 13:27:19.444142 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:27:19.444180 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.447754 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448141 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.448174 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448358 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.448593 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.448818 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.448994 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:19.545298 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:27:19.576678 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:27:19.604520 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:27:19.631640 1482618 provision.go:86] duration metric: configureAuth took 336.975454ms
	I1225 13:27:19.631674 1482618 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:27:19.631899 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:19.632012 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.635618 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636130 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.636166 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636644 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.636903 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637088 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637315 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.637511 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.638005 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.638040 1482618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:27:19.990807 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:27:19.990844 1482618 machine.go:91] provisioned docker machine in 993.840927ms
	I1225 13:27:19.990857 1482618 start.go:300] post-start starting for "old-k8s-version-198979" (driver="kvm2")
	I1225 13:27:19.990870 1482618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:27:19.990908 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:19.991349 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:27:19.991388 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.994622 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.994980 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.995015 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.995147 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.995402 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.995574 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.995713 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.089652 1482618 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:27:20.094575 1482618 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:27:20.094611 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:27:20.094716 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:27:20.094856 1482618 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:27:20.095010 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:27:20.105582 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:20.133802 1482618 start.go:303] post-start completed in 142.928836ms
	I1225 13:27:20.133830 1482618 fix.go:56] fixHost completed within 25.200724583s
	I1225 13:27:20.133860 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.137215 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137635 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.137670 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.138081 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138322 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138518 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.138732 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:20.139194 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:20.139228 1482618 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:27:20.268572 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510840.203941272
	
	I1225 13:27:20.268602 1482618 fix.go:206] guest clock: 1703510840.203941272
	I1225 13:27:20.268613 1482618 fix.go:219] Guest: 2023-12-25 13:27:20.203941272 +0000 UTC Remote: 2023-12-25 13:27:20.133835417 +0000 UTC m=+384.781536006 (delta=70.105855ms)
	I1225 13:27:20.268641 1482618 fix.go:190] guest clock delta is within tolerance: 70.105855ms
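The date +%s.%N round trip above is the clock-skew check: the guest timestamp is compared with the host time recorded just before the command returned, and a resync is only needed when the delta leaves the tolerance window. A small sketch of the comparison; the 2s threshold is an assumption for illustration, while the timestamps are the ones from the log (delta works out to the same 70.105855ms):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock reading is close enough to
    // the host clock that no resync is needed.
    func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        host := time.Date(2023, 12, 25, 13, 27, 20, 133835417, time.UTC)
        guest := time.Date(2023, 12, 25, 13, 27, 20, 203941272, time.UTC)
        fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(host, guest, 2*time.Second))
    }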
	I1225 13:27:20.268651 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 25.335582747s
	I1225 13:27:20.268683 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.268981 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:20.272181 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.272666 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272948 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273612 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273851 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273925 1482618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:27:20.273990 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.274108 1482618 ssh_runner.go:195] Run: cat /version.json
	I1225 13:27:20.274133 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.277090 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277381 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.277608 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278041 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278066 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.278085 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.278284 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278293 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278500 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.278516 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278691 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278852 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.395858 1482618 ssh_runner.go:195] Run: systemctl --version
	I1225 13:27:20.403417 1482618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:27:17.629846 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:19.635250 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:20.559485 1482618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:27:20.566356 1482618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:27:20.566487 1482618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:27:20.584531 1482618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:27:20.584565 1482618 start.go:475] detecting cgroup driver to use...
	I1225 13:27:20.584648 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:27:20.599889 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:27:20.613197 1482618 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:27:20.613278 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:27:20.626972 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:27:20.640990 1482618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:27:20.752941 1482618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:27:20.886880 1482618 docker.go:219] disabling docker service ...
	I1225 13:27:20.886971 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:27:20.903143 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:27:20.919083 1482618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:27:21.042116 1482618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:27:21.171997 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:27:21.185237 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:27:21.204711 1482618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:27:21.204787 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.215196 1482618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:27:21.215276 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.226411 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.239885 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.250576 1482618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
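The sed calls above point CRI-O at the registry.k8s.io/pause:3.1 pause image and switch its cgroup manager to cgroupfs by editing /etc/crio/crio.conf.d/02-crio.conf in place. A sketch of the same in-place rewrite done with Go's regexp package instead of sed (the file path and values come from the log; the helper itself is illustrative):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf mirrors the two sed edits in the log: force pause_image and
    // cgroup_manager to the desired values, whatever they were set to before.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.1", "cgroupfs")
        fmt.Println("rewrite:", err)
    }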
	I1225 13:27:21.263723 1482618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:27:21.274356 1482618 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:27:21.274462 1482618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:27:21.288126 1482618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
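The sysctl failure a few lines up is expected on a fresh boot: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the module is modprobed and IP forwarding is switched on before CRI-O restarts. A tiny sketch of that probe-then-enable order; the direct file write stands in for the echo command and needs root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // The bridge netfilter sysctls appear only after the module is loaded.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter:", err)
                return
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (needs root).
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }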
	I1225 13:27:21.300772 1482618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:27:21.467651 1482618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:27:21.700509 1482618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:27:21.700618 1482618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:27:21.708118 1482618 start.go:543] Will wait 60s for crictl version
	I1225 13:27:21.708207 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:21.712687 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:27:21.768465 1482618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:27:21.768563 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.836834 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.907627 1482618 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1225 13:27:21.288635 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.288669 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.288685 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.374966 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.375010 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.760268 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.771864 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:21.771898 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.259417 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.271720 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.271779 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.760217 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.767295 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.767333 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:23.259377 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:23.265348 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:27:23.275974 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:23.276010 1484104 api_server.go:131] duration metric: took 5.01669783s to wait for apiserver health ...
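The burst of 403 and 500 responses above is the apiserver warming up: anonymous access to /healthz is forbidden until the RBAC bootstrap roles land, then individual post-start hooks flip from failed to ok, and the wait ends on the first 200. A minimal polling sketch of that loop (the insecure TLS setting and the fixed retry interval are assumptions for illustration, not minikube's actual values):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
    // the deadline passes, mirroring the Checking/returned pairs in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver cert is not trusted by the test host, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.39:8444/healthz", 4*time.Minute))
    }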
	I1225 13:27:23.276024 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:23.276033 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:23.278354 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:23.279804 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:23.300762 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:23.326548 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:23.346826 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:27:23.346871 1484104 system_pods.go:61] "coredns-5dd5756b68-l7qnn" [860c88a5-5bb9-4556-814a-08f1cc882c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:23.346884 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [eca3b322-fbba-4d8e-b8be-10b7f552bd32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:23.346896 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [730b8b80-bf80-4769-b4cd-7e81b0600599] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:23.346908 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [8424df4f-e2d8-4f22-8593-21cf0ccc82eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:23.346965 1484104 system_pods.go:61] "kube-proxy-wnjn2" [ed9e8d7e-d237-46ab-84d1-a78f7f931aab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:23.346988 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [f865e5a4-4b21-4d15-a437-47965f0d1db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:23.347009 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-zgrj5" [d52789c5-dfe7-48e6-9dfd-a7dc5b5be6ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:23.347099 1484104 system_pods.go:61] "storage-provisioner" [96723fff-956b-42c4-864b-b18afb0c0285] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:27:23.347116 1484104 system_pods.go:74] duration metric: took 20.540773ms to wait for pod list to return data ...
	I1225 13:27:23.347135 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:23.358619 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:23.358673 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:23.358690 1484104 node_conditions.go:105] duration metric: took 11.539548ms to run NodePressure ...
	I1225 13:27:23.358716 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:23.795558 1484104 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804103 1484104 kubeadm.go:787] kubelet initialised
	I1225 13:27:23.804125 1484104 kubeadm.go:788] duration metric: took 8.535185ms waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804133 1484104 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:23.814199 1484104 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
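pod_ready.go's loop above watches each system-critical pod until its Ready condition turns True, logging "Ready":"False" every couple of seconds in the meantime (the interleaved metrics-server lines from the other profiles show the same loop stuck waiting). A condensed client-go sketch of that readiness check; the kubeconfig path and poll interval are assumptions, not minikube's internals:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s for up to 4m, the same budget the log line above states.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-5dd5756b68-l7qnn", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }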
	I1225 13:27:20.557056 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:22.569215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.054111 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:21.909021 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:21.912423 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.912802 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:21.912828 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.913199 1482618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:27:21.917615 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:21.931709 1482618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 13:27:21.931830 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:21.991133 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:21.991246 1482618 ssh_runner.go:195] Run: which lz4
	I1225 13:27:21.997721 1482618 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:27:22.003171 1482618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:27:22.003218 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1225 13:27:23.975639 1482618 crio.go:444] Took 1.977982 seconds to copy over tarball
	I1225 13:27:23.975723 1482618 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
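Because no preloaded images were found in CRI-O's store, the preload tarball (~440MB) is first stat'ed on the VM, then copied over and unpacked into /var with tar -I lz4. A sketch of that check-then-extract step via os/exec; the paths come from the log, and the plain local exec stands in for the SSH runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks the preload tarball into /var, but only if the
    // tarball actually landed on disk first (the stat seen in the log above).
    func extractPreload(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload tarball missing: %w", err)
        }
        // tar -I lz4 lets tar hand decompression off to the lz4 binary.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }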
	I1225 13:27:21.643721 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:24.132742 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.827617 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:28.322507 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.055526 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.558580 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.243294 1482618 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.267535049s)
	I1225 13:27:27.243339 1482618 crio.go:451] Took 3.267670 seconds to extract the tarball
	I1225 13:27:27.243368 1482618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:27.285528 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:27.338914 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:27.338948 1482618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:27:27.339078 1482618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.339115 1482618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.339118 1482618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.339160 1482618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.339114 1482618 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.339054 1482618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.339059 1482618 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:27:27.339060 1482618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340631 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.340647 1482618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.340658 1482618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.340632 1482618 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.340666 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340635 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.502560 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.502567 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.510502 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1225 13:27:27.513052 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.518668 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.522882 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.553027 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.608178 1482618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1225 13:27:27.608235 1482618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.608294 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.655271 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.671173 1482618 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1225 13:27:27.671223 1482618 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1225 13:27:27.671283 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.671290 1482618 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1225 13:27:27.671330 1482618 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.671378 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728043 1482618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1225 13:27:27.728102 1482618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.728139 1482618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1225 13:27:27.728159 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728187 1482618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.728222 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739034 1482618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1225 13:27:27.739077 1482618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.739133 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739156 1482618 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1225 13:27:27.739205 1482618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.739213 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.739261 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.858062 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1225 13:27:27.858089 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.858143 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.858175 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.858237 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.858301 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1225 13:27:27.858358 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:28.004051 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:27:28.004125 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1225 13:27:28.004183 1482618 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.004226 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1225 13:27:28.004304 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1225 13:27:28.004369 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1225 13:27:28.005012 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1225 13:27:28.009472 1482618 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1225 13:27:28.009491 1482618 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.009550 1482618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1225 13:27:29.560553 1482618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550970125s)
	I1225 13:27:29.560586 1482618 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1225 13:27:29.560668 1482618 cache_images.go:92] LoadImages completed in 2.22170407s
	W1225 13:27:29.560766 1482618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
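The LoadImages phase above compares what podman reports against the image IDs expected for v1.16.0, removes stale tags with crictl, and loads whatever is cached with podman load; here only pause:3.1 was cached locally, so the remaining images will have to be pulled normally and the run continues past the warning. A sketch of the inspect / rmi / load round-trip (the image name, ID and tarball path are taken from the log, the helper itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // reloadImage checks whether the runtime already has the image at the expected
    // ID; if not, it drops the stale tag and loads the cached tarball instead.
    func reloadImage(image, wantID, tarball string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) == wantID {
            return nil // already present at the right hash
        }
        // Remove whatever is tagged under this name, ignoring "not found" errors.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
        err := reloadImage("registry.k8s.io/pause:3.1",
            "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
            "/var/lib/minikube/images/pause_3.1")
        fmt.Println("reload:", err)
    }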
	I1225 13:27:29.560846 1482618 ssh_runner.go:195] Run: crio config
	I1225 13:27:29.639267 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:29.639298 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:29.639324 1482618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:29.639375 1482618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-198979 NodeName:old-k8s-version-198979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 13:27:29.639598 1482618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-198979"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-198979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.186:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:29.639711 1482618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-198979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
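
A note on the drop-in generated above: the empty ExecStart= line is deliberate. For a systemd service, assigning an empty value to ExecStart in a drop-in clears whatever the base kubelet.service defined, so the second ExecStart= line becomes the only start command. A minimal sketch of the same override pattern (file name and flags illustrative, not a copy of what minikube writes):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative)
[Service]
# Clear ExecStart inherited from the base unit...
ExecStart=
# ...then install the replacement command.
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf

A systemctl daemon-reload is required after writing a drop-in like this before restarting the service.
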
	I1225 13:27:29.639800 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1225 13:27:29.649536 1482618 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:29.649614 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:29.658251 1482618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1225 13:27:29.678532 1482618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:29.698314 1482618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1225 13:27:29.718873 1482618 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:29.723656 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
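
The /etc/hosts edit above is the usual idempotent pattern: filter out any existing control-plane.minikube.internal entry, append the current mapping, and copy the result back over /etc/hosts. A minimal Go sketch of the same idea, assuming it runs as root inside the guest (minikube itself does it with the one-line bash shown in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.186\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane alias.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}
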
	I1225 13:27:29.737736 1482618 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979 for IP: 192.168.39.186
	I1225 13:27:29.737787 1482618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:29.738006 1482618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:29.738069 1482618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:29.738147 1482618 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.key
	I1225 13:27:29.738211 1482618 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key.d0691019
	I1225 13:27:29.738252 1482618 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key
	I1225 13:27:29.738456 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:29.738501 1482618 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:29.738511 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:29.738543 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:29.738578 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:29.738617 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:29.738682 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:29.739444 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:29.765303 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:27:29.790702 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:29.818835 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 13:27:29.845659 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:29.872043 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:29.902732 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:29.928410 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:29.954350 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:29.978557 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:30.007243 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:30.036876 1482618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:30.055990 1482618 ssh_runner.go:195] Run: openssl version
	I1225 13:27:30.062813 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:30.075937 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082034 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082145 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.089645 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:30.102657 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:30.115701 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120635 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120711 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.128051 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:30.139465 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:30.151046 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156574 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156656 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.162736 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
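
The three blocks above follow the same recipe for making a certificate trusted system-wide: place it under /usr/share/ca-certificates, ask openssl for its subject-name hash, and point /etc/ssl/certs/<hash>.0 at it; that hash-named symlink is what OpenSSL's default verification path looks up (b5213941 is the hash of the minikube CA here). A rough Go equivalent of the per-certificate step, shelling out to openssl exactly as the log does (error handling trimmed, paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert makes certPath discoverable by OpenSSL's hashed lookup in /etc/ssl/certs.
func linkCert(certPath string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
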
	I1225 13:27:30.174356 1482618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:30.180962 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:30.187746 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:30.194481 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:30.202279 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:30.210555 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:30.218734 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
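
The -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours from now; a failing check would trigger regeneration. The equivalent check in Go, as a sketch (the logged openssl invocations are what actually runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon) // true would mean the cert needs regenerating
}
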
	I1225 13:27:30.225325 1482618 kubeadm.go:404] StartCluster: {Name:old-k8s-version-198979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:30.225424 1482618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:30.225478 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:30.274739 1482618 cri.go:89] found id: ""
	I1225 13:27:30.274842 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:30.285949 1482618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:30.285980 1482618 kubeadm.go:636] restartCluster start
	I1225 13:27:30.286051 1482618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:30.295521 1482618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:30.296804 1482618 kubeconfig.go:92] found "old-k8s-version-198979" server: "https://192.168.39.186:8443"
	I1225 13:27:30.299493 1482618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:30.308641 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.308745 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.320654 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:26.631365 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.129943 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.131590 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.329682 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.824743 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.824770 1484104 pod_ready.go:81] duration metric: took 8.010540801s waiting for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.824781 1484104 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830321 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.830347 1484104 pod_ready.go:81] duration metric: took 5.559816ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830358 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338865 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:32.338898 1484104 pod_ready.go:81] duration metric: took 508.532498ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338913 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846030 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.846054 1484104 pod_ready.go:81] duration metric: took 1.507133449s waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846065 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851826 1484104 pod_ready.go:92] pod "kube-proxy-wnjn2" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.851846 1484104 pod_ready.go:81] duration metric: took 5.775207ms waiting for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851855 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.054359 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:34.054586 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.809359 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.809482 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.821194 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.308690 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.308830 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.322775 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.809511 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.809612 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.823928 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.309450 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.309569 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.320937 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.809587 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.809686 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.822957 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.308905 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.308992 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.321195 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.808702 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.808803 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.820073 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.309661 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.309760 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.322931 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.809599 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.809724 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.825650 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:35.308697 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.308798 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.321313 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.630973 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.128884 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.859839 1484104 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.359809 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:36.359838 1484104 pod_ready.go:81] duration metric: took 2.507975576s waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:36.359853 1484104 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:38.371707 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.554699 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:39.053732 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.809083 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.809186 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.821434 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.309100 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.309181 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.322566 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.809026 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.809136 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.820791 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.309382 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.309501 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.321365 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.809397 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.809515 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.821538 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.309716 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.309819 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.321060 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.809627 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.809728 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.821784 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.309363 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.309483 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.320881 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.809420 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.809597 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.820752 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.308911 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:40.309009 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:40.322568 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.322614 1482618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:40.322653 1482618 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:40.322670 1482618 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:40.322730 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:40.366271 1482618 cri.go:89] found id: ""
	I1225 13:27:40.366365 1482618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:40.383123 1482618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:40.392329 1482618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:40.392412 1482618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401435 1482618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401471 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:38.131920 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.629516 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.868311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:42.872952 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:41.054026 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:43.054332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.538996 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.466467 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.697265 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.796796 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
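
Because existing configuration files were found, the restart path rebuilds the control plane by re-running individual kubeadm init phases against the freshly written kubeadm.yaml rather than doing a full kubeadm init. A compressed Go sketch of that sequence (the authoritative commands, including the sudo env PATH=... wrapper, are the ones logged above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
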
	I1225 13:27:41.898179 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:41.898290 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.398616 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.899373 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.399246 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.898788 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.923617 1482618 api_server.go:72] duration metric: took 2.025431683s to wait for apiserver process to appear ...
	I1225 13:27:43.923650 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:43.923684 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:42.632296 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.128501 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.368613 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.868011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.054778 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.559938 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:48.924695 1482618 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 13:27:48.924755 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.954284 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.954379 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:49.954401 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.985515 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.985568 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:50.424616 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.431560 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.431604 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:50.924173 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.935578 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.935622 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:51.424341 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:51.431709 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:27:51.440822 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:27:51.440855 1482618 api_server.go:131] duration metric: took 7.517198191s to wait for apiserver health ...
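
The healthz wait above tolerates the transient 403s (anonymous access to /healthz is refused until the RBAC bootstrap roles exist) and 500s (individual poststarthook checks still failing), and only stops once the endpoint returns 200 with body "ok". A minimal Go poller with the same shape, assuming an unauthenticated probe that skips TLS verification (minikube's actual check in api_server.go is more elaborate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is not in the probe's trust store, so the
		// bootstrap-time health probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.186:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (RBAC not bootstrapped yet) and 500 (failing poststarthooks) both mean "keep waiting".
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}
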
	I1225 13:27:51.440866 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:51.440873 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:51.442446 1482618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:47.130936 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:49.132275 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:51.443830 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:51.456628 1482618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
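
The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration matching the pod CIDR chosen earlier (10.244.0.0/16). Its exact contents are not reproduced in the log; a representative bridge conflist for this kind of setup looks roughly like the following, with every field value an assumption rather than a copy of minikube's file:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
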
	I1225 13:27:51.477822 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:51.487046 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:27:51.487082 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:27:51.487087 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:27:51.487091 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:27:51.487096 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Pending
	I1225 13:27:51.487100 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:27:51.487103 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:27:51.487107 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:27:51.487113 1482618 system_pods.go:74] duration metric: took 9.266811ms to wait for pod list to return data ...
	I1225 13:27:51.487120 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:51.491782 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:51.491817 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:51.491831 1482618 node_conditions.go:105] duration metric: took 4.70597ms to run NodePressure ...
	I1225 13:27:51.491855 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:51.768658 1482618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776258 1482618 kubeadm.go:787] kubelet initialised
	I1225 13:27:51.776283 1482618 kubeadm.go:788] duration metric: took 7.588357ms waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776293 1482618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:51.784053 1482618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.791273 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791314 1482618 pod_ready.go:81] duration metric: took 7.223677ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.791328 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791338 1482618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.801453 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801491 1482618 pod_ready.go:81] duration metric: took 10.138221ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.801505 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801514 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.809536 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809577 1482618 pod_ready.go:81] duration metric: took 8.051285ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.809590 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809608 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.882231 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882268 1482618 pod_ready.go:81] duration metric: took 72.643349ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.882299 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882309 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.282486 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282531 1482618 pod_ready.go:81] duration metric: took 400.208562ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.282543 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282552 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.689279 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689329 1482618 pod_ready.go:81] duration metric: took 406.764819ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.689343 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689353 1482618 pod_ready.go:38] duration metric: took 913.049281ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:52.689387 1482618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:52.705601 1482618 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:52.705628 1482618 kubeadm.go:640] restartCluster took 22.419638621s
	I1225 13:27:52.705639 1482618 kubeadm.go:406] StartCluster complete in 22.480335985s
	I1225 13:27:52.705663 1482618 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.705760 1482618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:52.708825 1482618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.709185 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:52.709313 1482618 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:52.709404 1482618 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709427 1482618 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-198979"
	W1225 13:27:52.709435 1482618 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:52.709443 1482618 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709460 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:52.709466 1482618 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709475 1482618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-198979"
	I1225 13:27:52.709482 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709488 1482618 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-198979"
	W1225 13:27:52.709502 1482618 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:52.709553 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709914 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709953 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709964 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709992 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709965 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.710046 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.729360 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1225 13:27:52.730016 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.730343 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1225 13:27:52.730527 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1225 13:27:52.730777 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.730808 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.730852 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731329 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.731365 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.731381 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.731589 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.731638 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731715 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.732311 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.732360 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.732731 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.732763 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.733225 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.733787 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.733859 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.735675 1482618 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-198979"
	W1225 13:27:52.735694 1482618 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:52.735725 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.736079 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.736117 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.751072 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I1225 13:27:52.752097 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.753002 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.753022 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.753502 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.753741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.756158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.758410 1482618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:52.758080 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I1225 13:27:52.759927 1482618 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.759942 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:52.759963 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.760521 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.761648 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.761665 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.762046 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.762823 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.762872 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.763974 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764712 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.764748 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764752 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.765009 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.765461 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.791493 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.792265 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.792294 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.792795 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.793023 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.795238 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.799536 1482618 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:52.800892 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:52.800920 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:52.800955 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.804762 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806571 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.806568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.806606 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806957 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.807115 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.807260 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.811419 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1225 13:27:52.811816 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.812352 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.812379 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.812872 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.813083 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.814823 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.815122 1482618 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:52.815138 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:52.815158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.818411 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.818892 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.818926 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.819253 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.819504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.819705 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.819981 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.963144 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.974697 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:52.974733 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:53.021391 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:53.039959 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:53.039991 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:53.121390 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.121421 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:53.196232 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.256419 1482618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-198979" context rescaled to 1 replicas
	I1225 13:27:53.256479 1482618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:53.258366 1482618 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:53.259807 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:53.276151 1482618 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:53.687341 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687374 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.687666 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.687690 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.687701 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687710 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.689261 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.689286 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.689294 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.725954 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.725985 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.726715 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.726737 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.726743 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.726776 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.726787 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.727040 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.727054 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.727061 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.744318 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.744356 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.744696 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.744745 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.846817 1482618 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:27:53.846878 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.846899 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847234 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847301 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847317 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847329 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.847351 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847728 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847767 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847793 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847810 1482618 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-198979"
	I1225 13:27:53.850107 1482618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:49.870506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.369916 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:50.056130 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.562555 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:53.851456 1482618 addons.go:508] enable addons completed in 1.14214354s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:51.635205 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.131852 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.868902 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.367267 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.368997 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.057522 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.555214 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.851206 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:58.350906 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:28:00.350892 1482618 node_ready.go:49] node "old-k8s-version-198979" has status "Ready":"True"
	I1225 13:28:00.350918 1482618 node_ready.go:38] duration metric: took 6.504066205s waiting for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:28:00.350928 1482618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:00.355882 1482618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362249 1482618 pod_ready.go:92] pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.362281 1482618 pod_ready.go:81] duration metric: took 6.362168ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362290 1482618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367738 1482618 pod_ready.go:92] pod "etcd-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.367777 1482618 pod_ready.go:81] duration metric: took 5.478984ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367790 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373724 1482618 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.373754 1482618 pod_ready.go:81] duration metric: took 5.95479ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373774 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380810 1482618 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.380841 1482618 pod_ready.go:81] duration metric: took 7.058206ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380854 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:56.635216 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.129464 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:01.132131 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.750612 1482618 pod_ready.go:92] pod "kube-proxy-vw9lf" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.750641 1482618 pod_ready.go:81] duration metric: took 369.779347ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.750651 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151567 1482618 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:01.151596 1482618 pod_ready.go:81] duration metric: took 400.937167ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151617 1482618 pod_ready.go:38] duration metric: took 800.677743ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:01.151634 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:28:01.151694 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:28:01.170319 1482618 api_server.go:72] duration metric: took 7.913795186s to wait for apiserver process to appear ...
	I1225 13:28:01.170349 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:28:01.170368 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:28:01.177133 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:28:01.178326 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:28:01.178351 1482618 api_server.go:131] duration metric: took 7.994163ms to wait for apiserver health ...
	I1225 13:28:01.178361 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:28:01.352663 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:28:01.352693 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.352697 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.352702 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.352706 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.352710 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.352714 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.352718 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.352724 1482618 system_pods.go:74] duration metric: took 174.35745ms to wait for pod list to return data ...
	I1225 13:28:01.352731 1482618 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:28:01.554095 1482618 default_sa.go:45] found service account: "default"
	I1225 13:28:01.554129 1482618 default_sa.go:55] duration metric: took 201.391529ms for default service account to be created ...
	I1225 13:28:01.554139 1482618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:28:01.757666 1482618 system_pods.go:86] 7 kube-system pods found
	I1225 13:28:01.757712 1482618 system_pods.go:89] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.757724 1482618 system_pods.go:89] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.757731 1482618 system_pods.go:89] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.757747 1482618 system_pods.go:89] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.757754 1482618 system_pods.go:89] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.757763 1482618 system_pods.go:89] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.757769 1482618 system_pods.go:89] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.757785 1482618 system_pods.go:126] duration metric: took 203.63938ms to wait for k8s-apps to be running ...
	I1225 13:28:01.757800 1482618 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:28:01.757863 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:28:01.771792 1482618 system_svc.go:56] duration metric: took 13.980705ms WaitForService to wait for kubelet.
	I1225 13:28:01.771821 1482618 kubeadm.go:581] duration metric: took 8.515309843s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:28:01.771843 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:28:01.952426 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:28:01.952463 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:28:01.952477 1482618 node_conditions.go:105] duration metric: took 180.629128ms to run NodePressure ...
	I1225 13:28:01.952493 1482618 start.go:228] waiting for startup goroutines ...
	I1225 13:28:01.952500 1482618 start.go:233] waiting for cluster config update ...
	I1225 13:28:01.952512 1482618 start.go:242] writing updated cluster config ...
	I1225 13:28:01.952974 1482618 ssh_runner.go:195] Run: rm -f paused
	I1225 13:28:02.007549 1482618 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I1225 13:28:02.009559 1482618 out.go:177] 
	W1225 13:28:02.011242 1482618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I1225 13:28:02.012738 1482618 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1225 13:28:02.014029 1482618 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-198979" cluster and "default" namespace by default
	I1225 13:28:01.869370 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.368824 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.055713 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:02.553981 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.554824 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:03.629358 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.130616 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.869993 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.367869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:07.054835 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.554904 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:08.130786 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:10.632435 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:11.368789 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.867665 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:12.054007 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:14.554676 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.129854 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.628997 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.869048 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:18.368070 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:16.557633 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:19.054486 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:17.629072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.129902 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.868173 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.868637 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:21.555027 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.054858 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.133148 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.630133 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:25.369437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.870029 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:26.056198 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:28.555876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.129583 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:29.629963 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:30.367773 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.368497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.369791 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:31.053212 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:33.054315 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.128310 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.130650 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.869325 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.367488 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:35.056761 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:37.554917 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.632857 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.129518 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.368425 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:43.868157 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:40.054854 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:42.555015 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:45.053900 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.630558 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:44.132072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.366422 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:48.368331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:47.056378 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.555186 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.629415 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.129249 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:51.129692 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:50.868321 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.366805 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:52.053785 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:54.057533 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.629427 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.629652 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.368197 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.867659 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.868187 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:56.556558 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.055474 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.629912 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.630858 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.868360 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:03.870936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.555132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.053887 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:02.127901 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.131186 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.367634 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.867571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.054546 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.554559 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.629995 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:09.129898 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:10.868677 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:12.868979 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.055554 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:13.554637 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.629511 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.129806 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.872549 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:17.371705 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:19.868438 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.054016 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.055476 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.629688 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.630125 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:21.132102 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.367525 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:24.369464 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:20.554660 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.556044 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:25.054213 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:23.630061 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.132281 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.868977 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.367384 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:27.055844 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.554124 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:28.630474 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:30.631070 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.367691 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.867941 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.555167 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.557066 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:32.634599 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:35.131402 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.369081 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.868497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.054764 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.054975 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:37.629895 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:39.630456 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:41.366745 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:43.367883 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:40.554998 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.555257 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.130638 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:44.629851 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.371692 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.866965 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.868100 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.057506 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.555247 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:46.632874 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.129782 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.130176 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.868818 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.868968 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:50.055939 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:52.556609 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.054048 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.132556 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.632608 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:56.368065 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.868076 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:57.054224 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:59.554940 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.128545 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.129437 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.868364 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:03.368093 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.054215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.056019 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.129706 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.130092 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:05.867992 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:07.872121 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.554889 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:09.056197 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.630974 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:08.632171 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.128952 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:10.367536 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:12.369331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.554738 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.555681 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.129878 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:15.130470 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:14.868630 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.367768 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.368295 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:16.054391 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:18.054606 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.630479 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.630971 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:21.873194 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.368931 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:20.054866 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.554974 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:25.053696 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.130831 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.630755 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:26.867555 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:28.868612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.054706 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.055614 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.133840 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.630572 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:30.868716 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.369710 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:31.554882 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.556367 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:32.129865 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:34.129987 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.870671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:38.367237 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.557755 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:37.559481 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:36.630513 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:39.130271 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.368072 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.869043 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.055427 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.554787 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.053876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:41.629178 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:43.630237 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.631199 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:44.873439 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.367548 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.368066 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.555106 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.556132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:48.130206 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:50.629041 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:51.369311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:53.870853 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.055511 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:54.061135 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.630215 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.130153 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.873755 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:58.367682 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:56.554861 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.054344 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:57.629571 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.630560 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:00.372506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:02.867084 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:01.554332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:03.554717 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.555955 1483118 pod_ready.go:81] duration metric: took 4m0.009196678s waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:04.555987 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:04.555994 1483118 pod_ready.go:38] duration metric: took 4m2.890580557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:04.556014 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:04.556050 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:04.556152 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:04.615717 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:04.615748 1483118 cri.go:89] found id: ""
	I1225 13:31:04.615759 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:04.615830 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.621669 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:04.621778 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:04.661088 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:04.661127 1483118 cri.go:89] found id: ""
	I1225 13:31:04.661139 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:04.661191 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.666410 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:04.666496 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:04.710927 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:04.710962 1483118 cri.go:89] found id: ""
	I1225 13:31:04.710973 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:04.711041 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.715505 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:04.715587 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:04.761494 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:04.761518 1483118 cri.go:89] found id: ""
	I1225 13:31:04.761527 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:04.761580 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.766925 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:04.767015 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:04.810640 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:04.810670 1483118 cri.go:89] found id: ""
	I1225 13:31:04.810685 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:04.810753 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.815190 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:04.815285 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:04.858275 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:04.858301 1483118 cri.go:89] found id: ""
	I1225 13:31:04.858309 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:04.858362 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.863435 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:04.863529 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:04.914544 1483118 cri.go:89] found id: ""
	I1225 13:31:04.914583 1483118 logs.go:284] 0 containers: []
	W1225 13:31:04.914594 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:04.914603 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:04.914675 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:04.969548 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:04.969577 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:04.969584 1483118 cri.go:89] found id: ""
	I1225 13:31:04.969594 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:04.969660 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.974172 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.978956 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:04.978989 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:05.033590 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:05.033632 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:02.133447 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.630226 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.869025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:07.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.369061 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:05.085851 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:05.085879 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:05.144002 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:05.144047 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:05.191669 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:05.191703 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:05.238581 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:05.238617 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:05.253236 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:05.253271 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:05.293626 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:05.293674 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:05.338584 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:05.338622 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:05.381135 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:05.381172 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:05.886860 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:05.886918 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:06.045040 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:06.045080 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.101152 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:06.101192 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.662518 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:08.678649 1483118 api_server.go:72] duration metric: took 4m14.820531999s to wait for apiserver process to appear ...
	I1225 13:31:08.678687 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:08.678729 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:08.678791 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:08.718202 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:08.718246 1483118 cri.go:89] found id: ""
	I1225 13:31:08.718255 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:08.718305 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.723089 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:08.723177 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:08.772619 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:08.772641 1483118 cri.go:89] found id: ""
	I1225 13:31:08.772649 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:08.772709 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.777577 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:08.777669 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:08.818869 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:08.818900 1483118 cri.go:89] found id: ""
	I1225 13:31:08.818910 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:08.818970 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.823301 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:08.823382 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:08.868885 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:08.868913 1483118 cri.go:89] found id: ""
	I1225 13:31:08.868924 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:08.868982 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.873489 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:08.873562 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:08.916925 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:08.916957 1483118 cri.go:89] found id: ""
	I1225 13:31:08.916967 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:08.917065 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.921808 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:08.921901 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:08.961586 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.961617 1483118 cri.go:89] found id: ""
	I1225 13:31:08.961628 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:08.961707 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.965986 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:08.966096 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:09.012223 1483118 cri.go:89] found id: ""
	I1225 13:31:09.012262 1483118 logs.go:284] 0 containers: []
	W1225 13:31:09.012270 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:09.012278 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:09.012343 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:09.060646 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:09.060675 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:09.060683 1483118 cri.go:89] found id: ""
	I1225 13:31:09.060694 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:09.060767 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.065955 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.070859 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:09.070890 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:09.128056 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:09.128096 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:09.179304 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:09.179341 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:09.194019 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:09.194048 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:09.339697 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:09.339743 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:09.389626 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:09.389669 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:09.831437 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:09.831498 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:09.888799 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:09.888848 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:09.932201 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:09.932232 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:09.983201 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:09.983242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:10.039094 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:10.039149 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.630567 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.130605 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:11.369445 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.870404 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:10.095628 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:10.095677 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:10.139678 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:10.139717 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:12.688297 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:31:12.693469 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:31:12.694766 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:31:12.694788 1483118 api_server.go:131] duration metric: took 4.016094906s to wait for apiserver health ...
	I1225 13:31:12.694796 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:12.694821 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:12.694876 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:12.743143 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:12.743174 1483118 cri.go:89] found id: ""
	I1225 13:31:12.743185 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:12.743238 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.747708 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:12.747803 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:12.800511 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:12.800540 1483118 cri.go:89] found id: ""
	I1225 13:31:12.800549 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:12.800612 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.805236 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:12.805308 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:12.850047 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:12.850081 1483118 cri.go:89] found id: ""
	I1225 13:31:12.850092 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:12.850152 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.854516 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:12.854602 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:12.902131 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:12.902162 1483118 cri.go:89] found id: ""
	I1225 13:31:12.902173 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:12.902239 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.907546 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:12.907634 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:12.966561 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:12.966590 1483118 cri.go:89] found id: ""
	I1225 13:31:12.966601 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:12.966674 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.971071 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:12.971161 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:13.026823 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.026851 1483118 cri.go:89] found id: ""
	I1225 13:31:13.026862 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:13.026927 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.031499 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:13.031576 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:13.077486 1483118 cri.go:89] found id: ""
	I1225 13:31:13.077512 1483118 logs.go:284] 0 containers: []
	W1225 13:31:13.077520 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:13.077526 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:13.077589 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:13.130262 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.130287 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.130294 1483118 cri.go:89] found id: ""
	I1225 13:31:13.130305 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:13.130364 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.138345 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.142749 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:13.142780 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:13.264652 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:13.264694 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:13.315138 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:13.315182 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:13.375532 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:13.375570 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.418188 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:13.418226 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:13.433392 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:13.433423 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:13.472447 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:13.472481 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.514578 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:13.514631 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:13.568962 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:13.569001 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:13.609819 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:13.609864 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.668114 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:13.668160 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:13.710116 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:13.710155 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:14.068484 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:14.068548 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:11.629829 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.632277 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:15.629964 1483946 pod_ready.go:81] duration metric: took 4m0.008391697s waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:15.629997 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:15.630006 1483946 pod_ready.go:38] duration metric: took 4m4.430454443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:15.630022 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:15.630052 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:15.630113 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:15.694629 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:15.694654 1483946 cri.go:89] found id: ""
	I1225 13:31:15.694666 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:15.694735 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.699777 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:15.699847 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:15.744267 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:15.744299 1483946 cri.go:89] found id: ""
	I1225 13:31:15.744308 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:15.744361 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.749213 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:15.749310 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:15.796903 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:15.796930 1483946 cri.go:89] found id: ""
	I1225 13:31:15.796939 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:15.797001 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.801601 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:15.801673 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:15.841792 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:15.841820 1483946 cri.go:89] found id: ""
	I1225 13:31:15.841830 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:15.841902 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.845893 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:15.845970 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:15.901462 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:15.901493 1483946 cri.go:89] found id: ""
	I1225 13:31:15.901505 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:15.901589 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.907173 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:15.907264 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:15.957143 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:15.957177 1483946 cri.go:89] found id: ""
	I1225 13:31:15.957186 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:15.957239 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.962715 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:15.962789 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:16.007949 1483946 cri.go:89] found id: ""
	I1225 13:31:16.007988 1483946 logs.go:284] 0 containers: []
	W1225 13:31:16.007999 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:16.008008 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:16.008076 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:16.063958 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:16.063984 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:16.063989 1483946 cri.go:89] found id: ""
	I1225 13:31:16.063997 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:16.064052 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.069193 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.074310 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:16.074333 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:16.120318 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:16.120363 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:16.176217 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:16.176264 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:16.633470 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:16.633507 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.633512 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.633516 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.633521 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.633525 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.633529 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.633536 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.633541 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.633548 1483118 system_pods.go:74] duration metric: took 3.938745899s to wait for pod list to return data ...
	I1225 13:31:16.633556 1483118 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:16.637279 1483118 default_sa.go:45] found service account: "default"
	I1225 13:31:16.637314 1483118 default_sa.go:55] duration metric: took 3.749637ms for default service account to be created ...
	I1225 13:31:16.637325 1483118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:16.644466 1483118 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:16.644501 1483118 system_pods.go:89] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.644509 1483118 system_pods.go:89] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.644516 1483118 system_pods.go:89] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.644523 1483118 system_pods.go:89] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.644530 1483118 system_pods.go:89] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.644536 1483118 system_pods.go:89] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.644547 1483118 system_pods.go:89] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.644558 1483118 system_pods.go:89] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.644583 1483118 system_pods.go:126] duration metric: took 7.250639ms to wait for k8s-apps to be running ...
	I1225 13:31:16.644594 1483118 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:16.644658 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:16.661680 1483118 system_svc.go:56] duration metric: took 17.070893ms WaitForService to wait for kubelet.
	I1225 13:31:16.661723 1483118 kubeadm.go:581] duration metric: took 4m22.80360778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:16.661754 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:16.666189 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:16.666227 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:16.666294 1483118 node_conditions.go:105] duration metric: took 4.531137ms to run NodePressure ...
	I1225 13:31:16.666313 1483118 start.go:228] waiting for startup goroutines ...
	I1225 13:31:16.666323 1483118 start.go:233] waiting for cluster config update ...
	I1225 13:31:16.666338 1483118 start.go:242] writing updated cluster config ...
	I1225 13:31:16.666702 1483118 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:16.729077 1483118 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:31:16.732824 1483118 out.go:177] * Done! kubectl is now configured to use "no-preload-330063" cluster and "default" namespace by default
	I1225 13:31:16.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:18.374788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:16.686611 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:16.686650 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:16.748667 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:16.748705 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:16.937661 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:16.937700 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:16.988870 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:16.988908 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:17.048278 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:17.048316 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:17.095857 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:17.095900 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:17.135425 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:17.135460 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:17.197626 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:17.197670 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:17.213658 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:17.213695 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:17.282101 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:17.282149 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:19.824939 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:19.840944 1483946 api_server.go:72] duration metric: took 4m11.866743679s to wait for apiserver process to appear ...
	I1225 13:31:19.840985 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:19.841036 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:19.841114 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:19.895404 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:19.895445 1483946 cri.go:89] found id: ""
	I1225 13:31:19.895455 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:19.895519 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.900604 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:19.900686 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:19.943623 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:19.943652 1483946 cri.go:89] found id: ""
	I1225 13:31:19.943662 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:19.943728 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.948230 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:19.948298 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:19.993271 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:19.993296 1483946 cri.go:89] found id: ""
	I1225 13:31:19.993304 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:19.993355 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.997702 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:19.997790 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:20.043487 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.043514 1483946 cri.go:89] found id: ""
	I1225 13:31:20.043525 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:20.043591 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.047665 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:20.047748 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:20.091832 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.091867 1483946 cri.go:89] found id: ""
	I1225 13:31:20.091878 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:20.091947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.096400 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:20.096463 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:20.136753 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.136785 1483946 cri.go:89] found id: ""
	I1225 13:31:20.136794 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:20.136867 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.141479 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:20.141559 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:20.184635 1483946 cri.go:89] found id: ""
	I1225 13:31:20.184677 1483946 logs.go:284] 0 containers: []
	W1225 13:31:20.184688 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:20.184694 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:20.184770 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:20.231891 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.231918 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.231923 1483946 cri.go:89] found id: ""
	I1225 13:31:20.231932 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:20.231991 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.236669 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.240776 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:20.240804 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:20.305411 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:20.305479 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:20.376688 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:20.376729 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:20.419016 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:20.419060 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.465253 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:20.465288 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.505949 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:20.505994 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.565939 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:20.565995 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.608765 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:20.608798 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.646031 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:20.646076 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:20.694772 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:20.694812 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:20.710038 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:20.710074 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:20.841944 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:20.841996 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:21.267824 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:21.267884 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:20.869158 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:22.870463 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:23.834749 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:31:23.840763 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:31:23.842396 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:31:23.842424 1483946 api_server.go:131] duration metric: took 4.001431078s to wait for apiserver health ...
	I1225 13:31:23.842451 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:23.842481 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:23.842535 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:23.901377 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:23.901409 1483946 cri.go:89] found id: ""
	I1225 13:31:23.901420 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:23.901489 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.906312 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:23.906382 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:23.957073 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:23.957105 1483946 cri.go:89] found id: ""
	I1225 13:31:23.957115 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:23.957175 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.961899 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:23.961968 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:24.009529 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:24.009575 1483946 cri.go:89] found id: ""
	I1225 13:31:24.009587 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:24.009656 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.014579 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:24.014668 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:24.059589 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:24.059618 1483946 cri.go:89] found id: ""
	I1225 13:31:24.059629 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:24.059698 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.065185 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:24.065265 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:24.123904 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.123932 1483946 cri.go:89] found id: ""
	I1225 13:31:24.123942 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:24.124006 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.128753 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:24.128849 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:24.172259 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:24.172285 1483946 cri.go:89] found id: ""
	I1225 13:31:24.172296 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:24.172363 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.177276 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:24.177356 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:24.223415 1483946 cri.go:89] found id: ""
	I1225 13:31:24.223445 1483946 logs.go:284] 0 containers: []
	W1225 13:31:24.223453 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:24.223459 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:24.223516 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:24.267840 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:24.267866 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:24.267870 1483946 cri.go:89] found id: ""
	I1225 13:31:24.267878 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:24.267939 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.272947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.279183 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:24.279213 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:24.343548 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:24.343592 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:24.398275 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:24.398312 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.443435 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:24.443472 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:24.814711 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:24.814770 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:24.828613 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:24.828649 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:24.979501 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:24.979538 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:25.028976 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:25.029011 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:25.083148 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:25.083191 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:25.155284 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:25.155336 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:25.213437 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:25.213483 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:25.260934 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:25.260973 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:25.307395 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:25.307430 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:27.884673 1483946 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:27.884702 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.884708 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.884713 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.884717 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.884721 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.884725 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.884731 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.884737 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.884744 1483946 system_pods.go:74] duration metric: took 4.04228589s to wait for pod list to return data ...
	I1225 13:31:27.884752 1483946 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:27.889125 1483946 default_sa.go:45] found service account: "default"
	I1225 13:31:27.889156 1483946 default_sa.go:55] duration metric: took 4.397454ms for default service account to be created ...
	I1225 13:31:27.889167 1483946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:27.896851 1483946 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:27.896879 1483946 system_pods.go:89] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.896884 1483946 system_pods.go:89] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.896889 1483946 system_pods.go:89] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.896894 1483946 system_pods.go:89] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.896898 1483946 system_pods.go:89] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.896901 1483946 system_pods.go:89] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.896908 1483946 system_pods.go:89] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.896912 1483946 system_pods.go:89] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.896920 1483946 system_pods.go:126] duration metric: took 7.747348ms to wait for k8s-apps to be running ...
	I1225 13:31:27.896929 1483946 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:27.896981 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:27.917505 1483946 system_svc.go:56] duration metric: took 20.559839ms WaitForService to wait for kubelet.
	I1225 13:31:27.917542 1483946 kubeadm.go:581] duration metric: took 4m19.94335169s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:27.917568 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:27.921689 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:27.921715 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:27.921797 1483946 node_conditions.go:105] duration metric: took 4.219723ms to run NodePressure ...
	I1225 13:31:27.921814 1483946 start.go:228] waiting for startup goroutines ...
	I1225 13:31:27.921825 1483946 start.go:233] waiting for cluster config update ...
	I1225 13:31:27.921838 1483946 start.go:242] writing updated cluster config ...
	I1225 13:31:27.922130 1483946 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:27.976011 1483946 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:31:27.978077 1483946 out.go:177] * Done! kubectl is now configured to use "embed-certs-880612" cluster and "default" namespace by default
	I1225 13:31:24.870628 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:26.873379 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:29.367512 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:31.367730 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:33.867551 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:36.360292 1484104 pod_ready.go:81] duration metric: took 4m0.000407846s waiting for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:36.360349 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" (will not retry!)
	I1225 13:31:36.360378 1484104 pod_ready.go:38] duration metric: took 4m12.556234617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:36.360445 1484104 kubeadm.go:640] restartCluster took 4m32.941510355s
	W1225 13:31:36.360540 1484104 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1225 13:31:36.360578 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1225 13:31:50.552320 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.191703988s)
	I1225 13:31:50.552417 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:50.569621 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:31:50.581050 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:31:50.591777 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:31:50.591837 1484104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:31:50.651874 1484104 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 13:31:50.651952 1484104 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:31:50.822009 1484104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:31:50.822174 1484104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:31:50.822258 1484104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:31:51.074237 1484104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:31:51.077463 1484104 out.go:204]   - Generating certificates and keys ...
	I1225 13:31:51.077575 1484104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:31:51.077637 1484104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:31:51.077703 1484104 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 13:31:51.077755 1484104 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1225 13:31:51.077816 1484104 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1225 13:31:51.077908 1484104 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1225 13:31:51.078059 1484104 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1225 13:31:51.078715 1484104 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1225 13:31:51.079408 1484104 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 13:31:51.080169 1484104 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 13:31:51.080635 1484104 kubeadm.go:322] [certs] Using the existing "sa" key
	I1225 13:31:51.080724 1484104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:31:51.147373 1484104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:31:51.298473 1484104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:31:51.403869 1484104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:31:51.719828 1484104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:31:51.720523 1484104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:31:51.725276 1484104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:31:51.727100 1484104 out.go:204]   - Booting up control plane ...
	I1225 13:31:51.727248 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:31:51.727343 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:31:51.727431 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:31:51.745500 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:31:51.746331 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:31:51.746392 1484104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:31:51.897052 1484104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:32:00.401261 1484104 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504339 seconds
	I1225 13:32:00.401463 1484104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:32:00.422010 1484104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:32:00.962174 1484104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:32:00.962418 1484104 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-344803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:32:01.479956 1484104 kubeadm.go:322] [bootstrap-token] Using token: 7n7qlp.3wejtqrgqunjtf8y
	I1225 13:32:01.481699 1484104 out.go:204]   - Configuring RBAC rules ...
	I1225 13:32:01.481862 1484104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:32:01.489709 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:32:01.499287 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:32:01.504520 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:32:01.508950 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:32:01.517277 1484104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:32:01.537420 1484104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:32:01.820439 1484104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:32:01.897010 1484104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:32:01.897039 1484104 kubeadm.go:322] 
	I1225 13:32:01.897139 1484104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:32:01.897169 1484104 kubeadm.go:322] 
	I1225 13:32:01.897259 1484104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:32:01.897270 1484104 kubeadm.go:322] 
	I1225 13:32:01.897292 1484104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:32:01.897383 1484104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:32:01.897471 1484104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:32:01.897484 1484104 kubeadm.go:322] 
	I1225 13:32:01.897558 1484104 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:32:01.897568 1484104 kubeadm.go:322] 
	I1225 13:32:01.897621 1484104 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:32:01.897629 1484104 kubeadm.go:322] 
	I1225 13:32:01.897702 1484104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:32:01.897822 1484104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:32:01.897923 1484104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:32:01.897935 1484104 kubeadm.go:322] 
	I1225 13:32:01.898040 1484104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:32:01.898141 1484104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:32:01.898156 1484104 kubeadm.go:322] 
	I1225 13:32:01.898264 1484104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898455 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:32:01.898506 1484104 kubeadm.go:322] 	--control-plane 
	I1225 13:32:01.898516 1484104 kubeadm.go:322] 
	I1225 13:32:01.898627 1484104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:32:01.898645 1484104 kubeadm.go:322] 
	I1225 13:32:01.898760 1484104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898898 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:32:01.899552 1484104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:32:01.899699 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:32:01.899720 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:32:01.902817 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:32:01.904375 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:32:01.943752 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:32:02.004751 1484104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:32:02.004915 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=default-k8s-diff-port-344803 minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.004920 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.377800 1484104 ops.go:34] apiserver oom_adj: -16
	I1225 13:32:02.378388 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.879083 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.379453 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.878676 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.378589 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.878630 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.378615 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.879009 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.379100 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.878610 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.378604 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.878597 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.379427 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.878637 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.378638 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.879200 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.378659 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.879285 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.378603 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.878605 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.379451 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.879431 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.379034 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.878468 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.378592 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.878569 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:15.008581 1484104 kubeadm.go:1088] duration metric: took 13.00372954s to wait for elevateKubeSystemPrivileges.
	I1225 13:32:15.008626 1484104 kubeadm.go:406] StartCluster complete in 5m11.652335467s
	I1225 13:32:15.008653 1484104 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.008763 1484104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:32:15.011655 1484104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.011982 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:32:15.012172 1484104 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:32:15.012258 1484104 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012285 1484104 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012297 1484104 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:32:15.012311 1484104 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012347 1484104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-344803"
	I1225 13:32:15.012363 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012798 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012800 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012831 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012833 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012898 1484104 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012912 1484104 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012919 1484104 addons.go:246] addon metrics-server should already be in state true
	I1225 13:32:15.012961 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012972 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:32:15.013289 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.013318 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.032424 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1225 13:32:15.032981 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1225 13:32:15.033180 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1225 13:32:15.033455 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033575 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033623 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.034052 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034069 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034173 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034195 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034209 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034238 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034412 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034635 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034693 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034728 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.036190 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036205 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036228 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.036229 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.040383 1484104 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.040442 1484104 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:32:15.040473 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.040780 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.040820 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.055366 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1225 13:32:15.055979 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.056596 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.056623 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1225 13:32:15.057067 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057205 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057218 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.057413 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.057741 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.057768 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.057958 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.058013 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.058122 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058413 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058776 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.058816 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.059142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.059588 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.061854 1484104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:32:15.060849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.063569 1484104 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.063593 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:32:15.065174 1484104 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:32:15.063622 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.066654 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:32:15.066677 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:32:15.066700 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.071209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071995 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072039 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072074 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072319 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072558 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072875 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.072941 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.073085 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.073138 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.077927 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1225 13:32:15.078428 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.079241 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.079262 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.079775 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.079983 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.081656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.082002 1484104 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.082024 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:32:15.082047 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.085367 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.085779 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.085805 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.086119 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.086390 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.086656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.086875 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.262443 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:32:15.262470 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:32:15.270730 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:32:15.285178 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.302070 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:32:15.302097 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:32:15.303686 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.373021 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.373054 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:32:15.461862 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.518928 1484104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-344803" context rescaled to 1 replicas
	I1225 13:32:15.518973 1484104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:32:15.520858 1484104 out.go:177] * Verifying Kubernetes components...
	I1225 13:32:15.522326 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:32:16.993620 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.72284687s)
	I1225 13:32:16.993667 1484104 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1225 13:32:17.329206 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.025471574s)
	I1225 13:32:17.329305 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329321 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329352 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044135646s)
	I1225 13:32:17.329411 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329430 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329697 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329722 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329737 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329747 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.329764 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329740 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329805 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329825 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329838 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.331647 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331675 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331706 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331715 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.331734 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331766 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.350031 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.350068 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.350458 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.350499 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.350516 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.582723 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.120815372s)
	I1225 13:32:17.582785 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.582798 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.582787 1484104 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.060422325s)
	I1225 13:32:17.582838 1484104 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.583145 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583172 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.583179 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583192 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.583201 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.583438 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583461 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583471 1484104 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-344803"
	I1225 13:32:17.585288 1484104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:32:17.586537 1484104 addons.go:508] enable addons completed in 2.574365441s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:32:17.595130 1484104 node_ready.go:49] node "default-k8s-diff-port-344803" has status "Ready":"True"
	I1225 13:32:17.595165 1484104 node_ready.go:38] duration metric: took 12.307997ms waiting for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.595181 1484104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:32:17.613099 1484104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:19.621252 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:20.621494 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.621519 1484104 pod_ready.go:81] duration metric: took 3.008379569s waiting for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.621528 1484104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630348 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.630375 1484104 pod_ready.go:81] duration metric: took 8.841316ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630387 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636928 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.636953 1484104 pod_ready.go:81] duration metric: took 6.558203ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636963 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643335 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.643360 1484104 pod_ready.go:81] duration metric: took 6.390339ms waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643369 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649496 1484104 pod_ready.go:92] pod "kube-proxy-fpk9s" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.649526 1484104 pod_ready.go:81] duration metric: took 6.150243ms waiting for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649535 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018065 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:21.018092 1484104 pod_ready.go:81] duration metric: took 368.549291ms waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018102 1484104 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:23.026953 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:25.525822 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:27.530780 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:30.033601 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:32.528694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:34.529208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:37.028717 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:39.526632 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:42.026868 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:44.028002 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:46.526534 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:48.529899 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:51.026062 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:53.525655 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:55.526096 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:58.026355 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:00.026674 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:02.029299 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:04.526609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:06.526810 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:09.026498 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:11.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:13.029416 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:15.526242 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:18.026664 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:20.529125 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:23.026694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:25.029350 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:27.527537 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:30.030562 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:32.526381 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:34.526801 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:37.027939 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:39.526249 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:41.526511 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:43.526783 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:45.527693 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:48.026703 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:50.027582 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:52.526290 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:55.027458 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:57.526559 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:59.526699 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:01.527938 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:03.529353 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:06.025942 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:08.027340 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:10.028087 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:12.525688 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:14.527122 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:16.529380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:19.026128 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:21.026183 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:23.027208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:25.526282 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:27.531847 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:30.030025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:32.526291 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:34.526470 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:36.527179 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:39.026270 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:41.029609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:43.528905 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:46.026666 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:48.528560 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:51.025864 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:53.027211 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:55.527359 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:58.025696 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:00.027368 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:02.027605 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:04.525836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:06.526571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:08.528550 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:11.026765 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:13.028215 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:15.525903 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:17.527102 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:20.026011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:22.525873 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:24.528380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:27.026402 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:29.527869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:32.026671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:34.026737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:36.026836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:38.526788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:41.027387 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:43.526936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:46.026316 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:48.026940 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:50.526565 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:53.025988 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:55.027146 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:57.527287 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:00.028971 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:02.526704 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:05.025995 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:07.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:09.027839 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:11.526845 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:13.527737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:16.026967 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:18.028747 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:20.527437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:21.027372 1484104 pod_ready.go:81] duration metric: took 4m0.009244403s waiting for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	E1225 13:36:21.027405 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:36:21.027418 1484104 pod_ready.go:38] duration metric: took 4m3.432224558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:36:21.027474 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:36:21.027560 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:21.027806 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:21.090421 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:21.090464 1484104 cri.go:89] found id: ""
	I1225 13:36:21.090474 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:21.090526 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.095523 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:21.095605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:21.139092 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:21.139126 1484104 cri.go:89] found id: ""
	I1225 13:36:21.139136 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:21.139206 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.143957 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:21.144038 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:21.190905 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:21.190937 1484104 cri.go:89] found id: ""
	I1225 13:36:21.190948 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:21.191018 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.195814 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:21.195882 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:21.240274 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:21.240307 1484104 cri.go:89] found id: ""
	I1225 13:36:21.240317 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:21.240384 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.244831 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:21.244930 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:21.289367 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:21.289399 1484104 cri.go:89] found id: ""
	I1225 13:36:21.289410 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:21.289478 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.293796 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:21.293878 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:21.338757 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:21.338789 1484104 cri.go:89] found id: ""
	I1225 13:36:21.338808 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:21.338878 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.343145 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:21.343217 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:21.384898 1484104 cri.go:89] found id: ""
	I1225 13:36:21.384929 1484104 logs.go:284] 0 containers: []
	W1225 13:36:21.384936 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:21.384943 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:21.385006 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:21.436776 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:21.436809 1484104 cri.go:89] found id: ""
	I1225 13:36:21.436818 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:21.436871 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.442173 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:21.442210 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:21.886890 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:21.886944 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:21.971380 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:21.971568 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:21.992672 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:21.992724 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:22.015144 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:22.015198 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:22.195011 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:22.195060 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:22.237377 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:22.237423 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:22.284207 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:22.284240 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:22.343882 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:22.343939 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:22.404320 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:22.404356 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:22.465126 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:22.465175 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:22.521920 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:22.521963 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:22.575563 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:22.575601 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:22.627508 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627549 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:22.627808 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:22.627849 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:22.627862 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:22.627871 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627882 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:32.629903 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:36:32.648435 1484104 api_server.go:72] duration metric: took 4m17.129427556s to wait for apiserver process to appear ...
	I1225 13:36:32.648461 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:36:32.648499 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:32.648567 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:32.705637 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:32.705673 1484104 cri.go:89] found id: ""
	I1225 13:36:32.705685 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:32.705754 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.710516 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:32.710591 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:32.757193 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:32.757225 1484104 cri.go:89] found id: ""
	I1225 13:36:32.757236 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:32.757302 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.762255 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:32.762335 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:32.812666 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:32.812692 1484104 cri.go:89] found id: ""
	I1225 13:36:32.812703 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:32.812758 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.817599 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:32.817676 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:32.861969 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:32.862011 1484104 cri.go:89] found id: ""
	I1225 13:36:32.862021 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:32.862084 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.868439 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:32.868525 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:32.929969 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:32.930006 1484104 cri.go:89] found id: ""
	I1225 13:36:32.930015 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:32.930077 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.936071 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:32.936149 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:32.980256 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:32.980280 1484104 cri.go:89] found id: ""
	I1225 13:36:32.980288 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:32.980345 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.985508 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:32.985605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:33.029393 1484104 cri.go:89] found id: ""
	I1225 13:36:33.029429 1484104 logs.go:284] 0 containers: []
	W1225 13:36:33.029440 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:33.029448 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:33.029521 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:33.075129 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.075156 1484104 cri.go:89] found id: ""
	I1225 13:36:33.075167 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:33.075229 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:33.079900 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:33.079940 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.121355 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:33.121391 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:33.205175 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:33.205394 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:33.225359 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:33.225393 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:33.282658 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:33.282710 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:33.334586 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:33.334627 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:33.383538 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:33.383576 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:33.438245 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:33.438284 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:33.487260 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:33.487305 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:33.504627 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:33.504665 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:33.641875 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:33.641912 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:33.692275 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:33.692311 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:33.731932 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:33.731971 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:34.081286 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081325 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:34.081438 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:34.081456 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:34.081465 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:34.081477 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081490 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:44.083633 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:36:44.091721 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:36:44.093215 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:36:44.093242 1484104 api_server.go:131] duration metric: took 11.444775391s to wait for apiserver health ...
	I1225 13:36:44.093251 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:36:44.093279 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:44.093330 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:44.135179 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:44.135212 1484104 cri.go:89] found id: ""
	I1225 13:36:44.135229 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:44.135308 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.140367 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:44.140455 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:44.179525 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:44.179557 1484104 cri.go:89] found id: ""
	I1225 13:36:44.179568 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:44.179644 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.184724 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:44.184822 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:44.225306 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:44.225339 1484104 cri.go:89] found id: ""
	I1225 13:36:44.225351 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:44.225418 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.230354 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:44.230459 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:44.272270 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:44.272300 1484104 cri.go:89] found id: ""
	I1225 13:36:44.272311 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:44.272387 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.277110 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:44.277187 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:44.326495 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.326519 1484104 cri.go:89] found id: ""
	I1225 13:36:44.326527 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:44.326579 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.333707 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:44.333799 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:44.380378 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:44.380410 1484104 cri.go:89] found id: ""
	I1225 13:36:44.380423 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:44.380488 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.390075 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:44.390171 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:44.440171 1484104 cri.go:89] found id: ""
	I1225 13:36:44.440211 1484104 logs.go:284] 0 containers: []
	W1225 13:36:44.440223 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:44.440233 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:44.440321 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:44.482074 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:44.482104 1484104 cri.go:89] found id: ""
	I1225 13:36:44.482114 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:44.482178 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.487171 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:44.487209 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.532144 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:44.532179 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:44.891521 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:44.891568 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:44.938934 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:44.938967 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:45.017433 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.017627 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.039058 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:45.039097 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:45.054560 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:45.054592 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:45.113698 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:45.113735 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:45.158302 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:45.158342 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:45.204784 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:45.204824 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:45.276442 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:45.276483 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:45.320645 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:45.320678 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:45.452638 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:45.452681 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:45.500718 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500757 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:45.500817 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:45.500833 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.500844 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.500853 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500859 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:55.510930 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:36:55.510962 1484104 system_pods.go:61] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.510968 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.510973 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.510977 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.510984 1484104 system_pods.go:61] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.510987 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.510995 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.510999 1484104 system_pods.go:61] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.511014 1484104 system_pods.go:74] duration metric: took 11.417757674s to wait for pod list to return data ...
	I1225 13:36:55.511025 1484104 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:36:55.514087 1484104 default_sa.go:45] found service account: "default"
	I1225 13:36:55.514112 1484104 default_sa.go:55] duration metric: took 3.081452ms for default service account to be created ...
	I1225 13:36:55.514120 1484104 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:36:55.521321 1484104 system_pods.go:86] 8 kube-system pods found
	I1225 13:36:55.521355 1484104 system_pods.go:89] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.521365 1484104 system_pods.go:89] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.521370 1484104 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.521375 1484104 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.521380 1484104 system_pods.go:89] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.521387 1484104 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.521397 1484104 system_pods.go:89] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.521409 1484104 system_pods.go:89] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.521421 1484104 system_pods.go:126] duration metric: took 7.294824ms to wait for k8s-apps to be running ...
	I1225 13:36:55.521433 1484104 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:36:55.521492 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:36:55.540217 1484104 system_svc.go:56] duration metric: took 18.766893ms WaitForService to wait for kubelet.
	I1225 13:36:55.540248 1484104 kubeadm.go:581] duration metric: took 4m40.021246946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:36:55.540271 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:36:55.544519 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:36:55.544685 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:36:55.544742 1484104 node_conditions.go:105] duration metric: took 4.463666ms to run NodePressure ...
	I1225 13:36:55.544783 1484104 start.go:228] waiting for startup goroutines ...
	I1225 13:36:55.544795 1484104 start.go:233] waiting for cluster config update ...
	I1225 13:36:55.544810 1484104 start.go:242] writing updated cluster config ...
	I1225 13:36:55.545268 1484104 ssh_runner.go:195] Run: rm -f paused
	I1225 13:36:55.607984 1484104 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:36:55.609993 1484104 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-344803" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:02 UTC, ends at Mon 2023-12-25 13:40:18 UTC. --
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.629384674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511618629367270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8f31505f-12ac-48f5-bf4c-10bcba1510c1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.630040103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ab70bed-e748-4738-8bfe-3ec2537046c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.630231457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ab70bed-e748-4738-8bfe-3ec2537046c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.630544429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9
-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea
3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kub
ernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ab70bed-e748-4738-8bfe-3ec2537046c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.675093114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ccf22e12-f7b4-4704-9690-0070577cf707 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.675233641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ccf22e12-f7b4-4704-9690-0070577cf707 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.678428903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1df3a077-2104-4eab-a497-cdbdd77bef40 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.678770776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511618678748044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1df3a077-2104-4eab-a497-cdbdd77bef40 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.682052322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=70b90e22-4495-4a75-93df-dac9c5075e6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.682200481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=70b90e22-4495-4a75-93df-dac9c5075e6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.682680187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9
-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea
3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kub
ernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=70b90e22-4495-4a75-93df-dac9c5075e6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.728749975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=54ec3685-9974-4120-a00b-9fe6fd8f7864 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.728845725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=54ec3685-9974-4120-a00b-9fe6fd8f7864 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.730362060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7ba22d80-c2f1-434d-b394-6675d5bbed41 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.730709178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511618730695148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7ba22d80-c2f1-434d-b394-6675d5bbed41 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.731543111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96eb1722-62f8-47c8-a0b9-c77d0591cf90 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.731639261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96eb1722-62f8-47c8-a0b9-c77d0591cf90 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.731873339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9
-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea
3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kub
ernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96eb1722-62f8-47c8-a0b9-c77d0591cf90 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.768275789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d50ff106-c446-43aa-a968-dce24f7a9d21 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.768419623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d50ff106-c446-43aa-a968-dce24f7a9d21 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.769616412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ae19f3bd-e18d-4ea7-8895-a879c176069d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.769930321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511618769916259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ae19f3bd-e18d-4ea7-8895-a879c176069d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.770821535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c22a33f8-a2de-43b6-a050-00388620a24f name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.770895113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c22a33f8-a2de-43b6-a050-00388620a24f name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:18 no-preload-330063 crio[717]: time="2023-12-25 13:40:18.771093159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9
-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea
3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kub
ernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c22a33f8-a2de-43b6-a050-00388620a24f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f22e0dc3ae98f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c74d378a7ce6d       storage-provisioner
	e278192681968       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   36aa4226da020       busybox
	7ed64b4585957       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e3a8c0fdae79e       coredns-76f75df574-pwk9h
	41d1cc3530c54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c74d378a7ce6d       storage-provisioner
	b9051ad32027d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   31bea21ee6390       kube-proxy-jbch6
	3562a602302de       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   980debbc80268       kube-scheduler-no-preload-330063
	6d72676ee211f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   8de4520c02325       etcd-no-preload-330063
	ccc0750bcacd5       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   ceecef539d3f7       kube-apiserver-no-preload-330063
	ddc7a61af803e       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   1491fefd67203       kube-controller-manager-no-preload-330063
	
	
	==> coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40030 - 9349 "HINFO IN 7359491548542591292.800707443245296279. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009390068s
	
	
	==> describe nodes <==
	Name:               no-preload-330063
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-330063
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=no-preload-330063
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_19_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:18:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-330063
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:40:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:37:33 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:37:33 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:37:33 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:37:33 +0000   Mon, 25 Dec 2023 13:27:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.232
	  Hostname:    no-preload-330063
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 406372a65c9a43bf87e8eb26880385d4
	  System UUID:                406372a6-5c9a-43bf-87e8-eb26880385d4
	  Boot ID:                    23814a5a-2071-47fa-b212-ea86c8e3f921
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-pwk9h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-330063                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-330063             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-330063    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jbch6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-330063             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-q97kl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-330063 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-330063 event: Registered Node no-preload-330063 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-330063 event: Registered Node no-preload-330063 in Controller
	
	
	==> dmesg <==
	[Dec25 13:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072628] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.414624] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec25 13:26] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149826] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.433258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.371786] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.112521] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.175395] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.132801] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.256668] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +29.080641] systemd-fstab-generator[1334]: Ignoring "noauto" for root device
	[ +15.448146] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] <==
	{"level":"info","ts":"2023-12-25T13:26:49.425867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.232:2379"}
	{"level":"info","ts":"2023-12-25T13:27:02.467996Z","caller":"traceutil/trace.go:171","msg":"trace[1172228625] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:612; }","duration":"277.549752ms","start":"2023-12-25T13:27:02.190316Z","end":"2023-12-25T13:27:02.467866Z","steps":["trace[1172228625] 'read index received'  (duration: 277.542331ms)","trace[1172228625] 'applied index is now lower than readState.Index'  (duration: 6.109µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:27:02.467917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:02.148104Z","time spent":"319.807859ms","remote":"127.0.0.1:45976","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-12-25T13:27:02.468452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.223417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-330063\" ","response":"range_response_count:1 size:5609"}
	{"level":"info","ts":"2023-12-25T13:27:02.468661Z","caller":"traceutil/trace.go:171","msg":"trace[1715939902] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-330063; range_end:; response_count:1; response_revision:578; }","duration":"278.461987ms","start":"2023-12-25T13:27:02.190184Z","end":"2023-12-25T13:27:02.468646Z","steps":["trace[1715939902] 'agreement among raft nodes before linearized reading'  (duration: 278.180795ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:02.742659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.226868ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3021225548825485374 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.232\" mod_revision:540 > success:<request_put:<key:\"/registry/masterleases/192.168.72.232\" value_size:67 lease:3021225548825485371 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.232\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-25T13:27:02.742779Z","caller":"traceutil/trace.go:171","msg":"trace[2063143104] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:612; }","duration":"270.394668ms","start":"2023-12-25T13:27:02.472375Z","end":"2023-12-25T13:27:02.742769Z","steps":["trace[2063143104] 'read index received'  (duration: 124.451513ms)","trace[2063143104] 'applied index is now lower than readState.Index'  (duration: 145.941516ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:27:02.74288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.521012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-330063\" ","response":"range_response_count:1 size:4427"}
	{"level":"info","ts":"2023-12-25T13:27:02.742959Z","caller":"traceutil/trace.go:171","msg":"trace[991098436] range","detail":"{range_begin:/registry/minions/no-preload-330063; range_end:; response_count:1; response_revision:579; }","duration":"270.590198ms","start":"2023-12-25T13:27:02.472345Z","end":"2023-12-25T13:27:02.742935Z","steps":["trace[991098436] 'agreement among raft nodes before linearized reading'  (duration: 270.453887ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:27:02.74299Z","caller":"traceutil/trace.go:171","msg":"trace[237342704] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"271.96121ms","start":"2023-12-25T13:27:02.470947Z","end":"2023-12-25T13:27:02.742908Z","steps":["trace[237342704] 'process raft request'  (duration: 126.058332ms)","trace[237342704] 'compare'  (duration: 144.980866ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:03.963665Z","caller":"traceutil/trace.go:171","msg":"trace[2031875018] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"183.26115ms","start":"2023-12-25T13:27:03.780387Z","end":"2023-12-25T13:27:03.963648Z","steps":["trace[2031875018] 'read index received'  (duration: 183.090523ms)","trace[2031875018] 'applied index is now lower than readState.Index'  (duration: 169.775µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:03.963962Z","caller":"traceutil/trace.go:171","msg":"trace[875942442] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"227.195348ms","start":"2023-12-25T13:27:03.736754Z","end":"2023-12-25T13:27:03.963949Z","steps":["trace[875942442] 'process raft request'  (duration: 226.786694ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:03.964339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.958264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-25T13:27:03.964437Z","caller":"traceutil/trace.go:171","msg":"trace[1119136746] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:580; }","duration":"183.998882ms","start":"2023-12-25T13:27:03.780362Z","end":"2023-12-25T13:27:03.964361Z","steps":["trace[1119136746] 'agreement among raft nodes before linearized reading'  (duration: 183.919723ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:27:04.360552Z","caller":"traceutil/trace.go:171","msg":"trace[1645593883] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"378.566202ms","start":"2023-12-25T13:27:03.981972Z","end":"2023-12-25T13:27:04.360538Z","steps":["trace[1645593883] 'read index received'  (duration: 372.899014ms)","trace[1645593883] 'applied index is now lower than readState.Index'  (duration: 5.666442ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:04.360982Z","caller":"traceutil/trace.go:171","msg":"trace[59349680] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"382.91078ms","start":"2023-12-25T13:27:03.978058Z","end":"2023-12-25T13:27:04.360969Z","steps":["trace[59349680] 'process raft request'  (duration: 377.023041ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.361269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.978039Z","time spent":"383.066941ms","remote":"127.0.0.1:46010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5422,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-no-preload-330063\" mod_revision:580 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-no-preload-330063\" value_size:5365 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-no-preload-330063\" > >"}
	{"level":"warn","ts":"2023-12-25T13:27:04.361471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.535237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-25T13:27:04.361529Z","caller":"traceutil/trace.go:171","msg":"trace[253838963] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:581; }","duration":"379.639863ms","start":"2023-12-25T13:27:03.981879Z","end":"2023-12-25T13:27:04.361519Z","steps":["trace[253838963] 'agreement among raft nodes before linearized reading'  (duration: 379.558834ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.361589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.981858Z","time spent":"379.719802ms","remote":"127.0.0.1:46014","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":217,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" "}
	{"level":"warn","ts":"2023-12-25T13:27:04.361749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.023112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-330063\" ","response":"range_response_count:1 size:5437"}
	{"level":"info","ts":"2023-12-25T13:27:04.361798Z","caller":"traceutil/trace.go:171","msg":"trace[669444436] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-330063; range_end:; response_count:1; response_revision:581; }","duration":"172.070726ms","start":"2023-12-25T13:27:04.189719Z","end":"2023-12-25T13:27:04.36179Z","steps":["trace[669444436] 'agreement among raft nodes before linearized reading'  (duration: 172.005959ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:36:49.470484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":836}
	{"level":"info","ts":"2023-12-25T13:36:49.474631Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":836,"took":"3.577731ms","hash":902266057}
	{"level":"info","ts":"2023-12-25T13:36:49.474764Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":902266057,"revision":836,"compact-revision":-1}
	
	
	==> kernel <==
	 13:40:19 up 14 min,  0 users,  load average: 0.22, 0.19, 0.16
	Linux no-preload-330063 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] <==
	I1225 13:34:51.972361       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:36:50.972939       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:36:50.973351       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1225 13:36:51.973579       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:36:51.973675       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:36:51.973721       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:36:51.973811       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:36:51.974078       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:36:51.975408       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:37:51.973970       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:37:51.974039       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:37:51.974048       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:37:51.976384       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:37:51.976545       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:37:51.976579       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:39:51.975264       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:39:51.975357       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:39:51.975367       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:39:51.977585       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:39:51.977760       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:39:51.977798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] <==
	I1225 13:34:35.251998       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:35:04.768281       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:35:05.262878       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:35:34.774813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:35:35.272855       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:36:04.787314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:36:05.292496       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:36:34.793281       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:36:35.301859       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:37:04.803035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:37:05.317328       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:37:34.810381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:37:35.328311       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:37:41.333896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="256.632µs"
	I1225 13:37:54.339492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="179.576µs"
	E1225 13:38:04.816939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:38:05.342368       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:38:34.822678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:38:35.351631       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:39:04.828047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:39:05.360639       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:39:34.834608       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:39:35.369386       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:40:04.840529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:40:05.390854       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] <==
	I1225 13:26:52.668728       1 server_others.go:72] "Using iptables proxy"
	I1225 13:26:52.684970       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.232"]
	I1225 13:26:52.729430       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1225 13:26:52.729478       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:26:52.729494       1 server_others.go:168] "Using iptables Proxier"
	I1225 13:26:52.732746       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:26:52.733187       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1225 13:26:52.733225       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:26:52.734047       1 config.go:188] "Starting service config controller"
	I1225 13:26:52.734101       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:26:52.734192       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:26:52.734199       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:26:52.736796       1 config.go:315] "Starting node config controller"
	I1225 13:26:52.736882       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:26:52.834978       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 13:26:52.835057       1 shared_informer.go:318] Caches are synced for service config
	I1225 13:26:52.837903       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] <==
	I1225 13:26:48.332820       1 serving.go:380] Generated self-signed cert in-memory
	W1225 13:26:50.947694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:26:50.947815       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:26:50.947902       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:26:50.947908       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:26:51.001217       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I1225 13:26:51.001297       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:26:51.002713       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 13:26:51.002815       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 13:26:51.006617       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1225 13:26:51.008307       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 13:26:51.103745       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:02 UTC, ends at Mon 2023-12-25 13:40:19 UTC. --
	Dec 25 13:37:30 no-preload-330063 kubelet[1340]: E1225 13:37:30.332001    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:37:41 no-preload-330063 kubelet[1340]: E1225 13:37:41.314214    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:37:44 no-preload-330063 kubelet[1340]: E1225 13:37:44.326844    1340 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:37:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:37:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:37:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:37:54 no-preload-330063 kubelet[1340]: E1225 13:37:54.316768    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:38:08 no-preload-330063 kubelet[1340]: E1225 13:38:08.315472    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:38:20 no-preload-330063 kubelet[1340]: E1225 13:38:20.315685    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:38:32 no-preload-330063 kubelet[1340]: E1225 13:38:32.315283    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:38:44 no-preload-330063 kubelet[1340]: E1225 13:38:44.329102    1340 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:38:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:38:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:38:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:38:45 no-preload-330063 kubelet[1340]: E1225 13:38:45.313609    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:38:59 no-preload-330063 kubelet[1340]: E1225 13:38:59.313982    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:39:13 no-preload-330063 kubelet[1340]: E1225 13:39:13.313670    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:39:28 no-preload-330063 kubelet[1340]: E1225 13:39:28.314515    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:39:42 no-preload-330063 kubelet[1340]: E1225 13:39:42.316443    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:39:44 no-preload-330063 kubelet[1340]: E1225 13:39:44.328621    1340 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:39:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:39:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:39:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:39:54 no-preload-330063 kubelet[1340]: E1225 13:39:54.316068    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:40:09 no-preload-330063 kubelet[1340]: E1225 13:40:09.314298    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	
	
	==> storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] <==
	I1225 13:26:52.671292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 13:27:22.679636       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] <==
	I1225 13:27:23.789654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:23.806721       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:23.806925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:27:41.221597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:27:41.221962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296!
	I1225 13:27:41.226612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"984abe25-ea8f-40ab-a01d-41b1db70758a", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296 became leader
	I1225 13:27:41.323956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-330063 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-q97kl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl: exit status 1 (82.659335ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-q97kl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.59s)
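Note on the failure above: the kubelet log shows metrics-server-57f55c9bc5-q97kl stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, which is expected in this scenario because the addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table in the next section); the describe step then returns NotFound simply because that pod no longer exists by the time it runs. A rough manual equivalent of the harness's non-running-pod check, with an illustrative jsonpath that is not what helpers_test.go literally runs:

	kubectl --context no-preload-330063 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'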

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:33:56.706836 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:34:07.348386 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:35:30.399528 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:36:26.363380 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:40:28.591866019 +0000 UTC m=+5053.210343040
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
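For reference, the condition the harness is polling above can be checked by hand with something like the following; the context, namespace, and label selector come from the lines above, while the kubectl wait form and the 540s (9m0s) timeout are an illustrative stand-in for the Go poll loop rather than the literal harness code:

	kubectl --context embed-certs-880612 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

Note that kubectl wait may fail immediately with "no matching resources found" when, as here, no dashboard pod was ever created.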
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880612 logs -n 25: (1.779522241s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-435411                           | kubernetes-upgrade-435411    | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:17 UTC |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-198979        | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:25:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:25:09.868120 1484104 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:25:09.868323 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868335 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:25:09.868341 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868532 1484104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:25:09.869122 1484104 out.go:303] Setting JSON to false
	I1225 13:25:09.870130 1484104 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158863,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:25:09.870205 1484104 start.go:138] virtualization: kvm guest
	I1225 13:25:09.872541 1484104 out.go:177] * [default-k8s-diff-port-344803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:25:09.874217 1484104 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:25:09.874305 1484104 notify.go:220] Checking for updates...
	I1225 13:25:09.875839 1484104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:25:09.877587 1484104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:25:09.879065 1484104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:25:09.880503 1484104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:25:09.881819 1484104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:25:09.883607 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:25:09.884026 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.884110 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.899270 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1225 13:25:09.899708 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.900286 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.900337 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.900687 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.900912 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.901190 1484104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:25:09.901525 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.901579 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.916694 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1225 13:25:09.917130 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.917673 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.917704 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.918085 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.918333 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.953536 1484104 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:25:09.955050 1484104 start.go:298] selected driver: kvm2
	I1225 13:25:09.955065 1484104 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.955241 1484104 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:25:09.955956 1484104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.956047 1484104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:25:09.971769 1484104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:25:09.972199 1484104 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:25:09.972296 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:25:09.972313 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:25:09.972334 1484104 start_flags.go:323] config:
	{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-34480
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.972534 1484104 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.975411 1484104 out.go:177] * Starting control plane node default-k8s-diff-port-344803 in cluster default-k8s-diff-port-344803
	I1225 13:25:07.694690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:09.976744 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:25:09.976814 1484104 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:25:09.976830 1484104 cache.go:56] Caching tarball of preloaded images
	I1225 13:25:09.976928 1484104 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:25:09.976941 1484104 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:25:09.977353 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:25:09.977710 1484104 start.go:365] acquiring machines lock for default-k8s-diff-port-344803: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:10.766734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:16.850681 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:19.922690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:25.998796 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:29.070780 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:35.150661 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:38.222822 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:44.302734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.379073 1483118 start.go:369] acquired machines lock for "no-preload-330063" in 3m45.211894916s
	I1225 13:25:50.379143 1483118 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:25:50.379155 1483118 fix.go:54] fixHost starting: 
	I1225 13:25:50.379692 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:50.379739 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:50.395491 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1225 13:25:50.395953 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:50.396490 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:25:50.396512 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:50.396880 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:50.397080 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:25:50.397224 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:25:50.399083 1483118 fix.go:102] recreateIfNeeded on no-preload-330063: state=Stopped err=<nil>
	I1225 13:25:50.399110 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	W1225 13:25:50.399283 1483118 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:25:50.401483 1483118 out.go:177] * Restarting existing kvm2 VM for "no-preload-330063" ...
	I1225 13:25:47.374782 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.376505 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:25:50.376562 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:25:50.378895 1482618 machine.go:91] provisioned docker machine in 4m37.578359235s
	I1225 13:25:50.378958 1482618 fix.go:56] fixHost completed within 4m37.60680956s
	I1225 13:25:50.378968 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 4m37.606859437s
	W1225 13:25:50.378992 1482618 start.go:694] error starting host: provision: host is not running
	W1225 13:25:50.379100 1482618 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 13:25:50.379111 1482618 start.go:709] Will try again in 5 seconds ...
	I1225 13:25:50.403280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Start
	I1225 13:25:50.403507 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring networks are active...
	I1225 13:25:50.404422 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network default is active
	I1225 13:25:50.404784 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network mk-no-preload-330063 is active
	I1225 13:25:50.405087 1483118 main.go:141] libmachine: (no-preload-330063) Getting domain xml...
	I1225 13:25:50.405654 1483118 main.go:141] libmachine: (no-preload-330063) Creating domain...
	I1225 13:25:51.676192 1483118 main.go:141] libmachine: (no-preload-330063) Waiting to get IP...
	I1225 13:25:51.677110 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.677638 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.677715 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.677616 1484268 retry.go:31] will retry after 268.018359ms: waiting for machine to come up
	I1225 13:25:51.947683 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.948172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.948198 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.948118 1484268 retry.go:31] will retry after 278.681465ms: waiting for machine to come up
	I1225 13:25:52.228745 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.229234 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.229265 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.229166 1484268 retry.go:31] will retry after 329.72609ms: waiting for machine to come up
	I1225 13:25:52.560878 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.561315 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.561348 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.561257 1484268 retry.go:31] will retry after 398.659264ms: waiting for machine to come up
	I1225 13:25:52.962067 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.962596 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.962620 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.962548 1484268 retry.go:31] will retry after 474.736894ms: waiting for machine to come up
	I1225 13:25:53.439369 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:53.439834 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:53.439856 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:53.439795 1484268 retry.go:31] will retry after 632.915199ms: waiting for machine to come up
	I1225 13:25:54.074832 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.075320 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.075349 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.075286 1484268 retry.go:31] will retry after 889.605242ms: waiting for machine to come up
	I1225 13:25:54.966323 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.966800 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.966822 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.966757 1484268 retry.go:31] will retry after 1.322678644s: waiting for machine to come up
	I1225 13:25:55.379741 1482618 start.go:365] acquiring machines lock for old-k8s-version-198979: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:56.291182 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:56.291604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:56.291633 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:56.291567 1484268 retry.go:31] will retry after 1.717647471s: waiting for machine to come up
	I1225 13:25:58.011626 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:58.012081 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:58.012116 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:58.012018 1484268 retry.go:31] will retry after 2.29935858s: waiting for machine to come up
	I1225 13:26:00.314446 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:00.314833 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:00.314858 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:00.314806 1484268 retry.go:31] will retry after 2.50206405s: waiting for machine to come up
	I1225 13:26:02.819965 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:02.820458 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:02.820490 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:02.820403 1484268 retry.go:31] will retry after 2.332185519s: waiting for machine to come up
	I1225 13:26:05.155725 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:05.156228 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:05.156263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:05.156153 1484268 retry.go:31] will retry after 2.769754662s: waiting for machine to come up
	I1225 13:26:07.929629 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:07.930087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:07.930126 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:07.930040 1484268 retry.go:31] will retry after 4.407133766s: waiting for machine to come up
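The repeated "waiting for machine to come up" lines for no-preload-330063 are a poll for the VM's DHCP lease, with the retry interval growing (and jittered) from a few hundred milliseconds toward several seconds. Below is a sketch of such a backoff poll, assuming a getIP callback; it mirrors the shape of the intervals in the log rather than the retry.go implementation itself.

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls getIP until the machine reports an address or the timeout
    // expires, growing the wait between attempts the way the logged intervals grow.
    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil && ip != "" {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(wait / 2)))
            time.Sleep(wait + jitter)
            wait = wait * 3 / 2 // grow the interval between polls
        }
        return "", fmt.Errorf("timed out waiting for machine to come up")
    }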
	I1225 13:26:13.687348 1483946 start.go:369] acquired machines lock for "embed-certs-880612" in 1m27.002513209s
	I1225 13:26:13.687426 1483946 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:13.687436 1483946 fix.go:54] fixHost starting: 
	I1225 13:26:13.687850 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:13.687916 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:13.706054 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1225 13:26:13.706521 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:13.707063 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:26:13.707087 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:13.707472 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:13.707645 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:13.707832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:26:13.709643 1483946 fix.go:102] recreateIfNeeded on embed-certs-880612: state=Stopped err=<nil>
	I1225 13:26:13.709676 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	W1225 13:26:13.709868 1483946 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:13.712452 1483946 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880612" ...
	I1225 13:26:12.339674 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340219 1483118 main.go:141] libmachine: (no-preload-330063) Found IP for machine: 192.168.72.232
	I1225 13:26:12.340243 1483118 main.go:141] libmachine: (no-preload-330063) Reserving static IP address...
	I1225 13:26:12.340263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has current primary IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340846 1483118 main.go:141] libmachine: (no-preload-330063) Reserved static IP address: 192.168.72.232
	I1225 13:26:12.340896 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.340912 1483118 main.go:141] libmachine: (no-preload-330063) Waiting for SSH to be available...
	I1225 13:26:12.340947 1483118 main.go:141] libmachine: (no-preload-330063) DBG | skip adding static IP to network mk-no-preload-330063 - found existing host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"}
	I1225 13:26:12.340962 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Getting to WaitForSSH function...
	I1225 13:26:12.343164 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343417 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.343448 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343552 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH client type: external
	I1225 13:26:12.343566 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa (-rw-------)
	I1225 13:26:12.343587 1483118 main.go:141] libmachine: (no-preload-330063) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:12.343595 1483118 main.go:141] libmachine: (no-preload-330063) DBG | About to run SSH command:
	I1225 13:26:12.343603 1483118 main.go:141] libmachine: (no-preload-330063) DBG | exit 0
	I1225 13:26:12.434661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:12.435101 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetConfigRaw
	I1225 13:26:12.435827 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.438300 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438673 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.438705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438870 1483118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:26:12.439074 1483118 machine.go:88] provisioning docker machine ...
	I1225 13:26:12.439093 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:12.439335 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439556 1483118 buildroot.go:166] provisioning hostname "no-preload-330063"
	I1225 13:26:12.439584 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439789 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.442273 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442630 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.442661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442768 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.442956 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443127 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443271 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.443410 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.443772 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.443787 1483118 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-330063 && echo "no-preload-330063" | sudo tee /etc/hostname
	I1225 13:26:12.581579 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-330063
	
	I1225 13:26:12.581609 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.584621 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.584949 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.584979 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.585252 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.585495 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585656 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585790 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.585947 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.586320 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.586346 1483118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-330063' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-330063/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-330063' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:12.717139 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
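The two SSH commands above first set the guest hostname and then make sure /etc/hosts carries a matching 127.0.1.1 entry. A compact sketch of how those commands can be composed is shown here, with runSSH standing in for the driver's SSH runner; it is illustrative only.

    package sketch

    import "fmt"

    // provisionHostname mirrors the two SSH commands above: write the hostname,
    // then patch (or append) the 127.0.1.1 line in /etc/hosts if it is missing.
    func provisionHostname(runSSH func(cmd string) (string, error), name string) error {
        if _, err := runSSH(fmt.Sprintf(
            "sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
            return err
        }
        hostsFix := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        _, err := runSSH(hostsFix)
        return err
    }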
	I1225 13:26:12.717176 1483118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:12.717197 1483118 buildroot.go:174] setting up certificates
	I1225 13:26:12.717212 1483118 provision.go:83] configureAuth start
	I1225 13:26:12.717229 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.717570 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.720469 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.720828 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.720859 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.721016 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.723432 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723758 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.723815 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723944 1483118 provision.go:138] copyHostCerts
	I1225 13:26:12.724021 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:12.724035 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:12.724102 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:12.724207 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:12.724215 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:12.724242 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:12.724323 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:12.724330 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:12.724351 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:12.724408 1483118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.no-preload-330063 san=[192.168.72.232 192.168.72.232 localhost 127.0.0.1 minikube no-preload-330063]
	I1225 13:26:12.929593 1483118 provision.go:172] copyRemoteCerts
	I1225 13:26:12.929665 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:12.929699 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.932608 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.932934 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.932978 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.933144 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.933389 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.933581 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.933738 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.023574 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:13.047157 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:26:13.070779 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:13.094249 1483118 provision.go:86] duration metric: configureAuth took 377.018818ms
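configureAuth, shown above, regenerates a server certificate with the VM's IP in its SANs and then copies the CA and server cert/key into /etc/docker on the guest. A minimal sketch of that copy step follows; scpFile and the local file layout are assumptions for the sketch, not minikube's real API.

    package sketch

    import "path/filepath"

    // copyRemoteCerts copies the CA and server certificate material into
    // /etc/docker on the guest, as in the scp lines above.
    func copyRemoteCerts(runSSH func(string) (string, error), scpFile func(local, remote string) error, certDir string) error {
        if _, err := runSSH("sudo mkdir -p /etc/docker"); err != nil {
            return err
        }
        pairs := map[string]string{
            filepath.Join(certDir, "ca.pem"):         "/etc/docker/ca.pem",
            filepath.Join(certDir, "server.pem"):     "/etc/docker/server.pem",
            filepath.Join(certDir, "server-key.pem"): "/etc/docker/server-key.pem",
        }
        for local, remote := range pairs {
            if err := scpFile(local, remote); err != nil {
                return err
            }
        }
        return nil
    }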
	I1225 13:26:13.094284 1483118 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:13.094538 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:13.094665 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.097705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098133 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.098179 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098429 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.098708 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.098888 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.099029 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.099191 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.099516 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.099534 1483118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:13.430084 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:13.430138 1483118 machine.go:91] provisioned docker machine in 991.050011ms
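The container-runtime options step a few lines above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the literal "%!s(MISSING)" in the echoed command appears to be a fmt placeholder artifact of how the command is logged, not part of what actually runs on the guest. A sketch of the step, using the same illustrative runSSH stand-in as above:

    package sketch

    import "fmt"

    // configureCRIOOptions writes the extra CRI-O flags into a sysconfig file
    // and restarts the service, approximating the SSH command shown in the log.
    func configureCRIOOptions(runSSH func(string) (string, error), opts string) error {
        cmd := fmt.Sprintf(
            "sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
            opts)
        _, err := runSSH(cmd)
        return err
    }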
	I1225 13:26:13.430150 1483118 start.go:300] post-start starting for "no-preload-330063" (driver="kvm2")
	I1225 13:26:13.430162 1483118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:13.430185 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.430616 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:13.430661 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.433623 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434018 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.434054 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434191 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.434413 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.434586 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.434700 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.523954 1483118 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:13.528009 1483118 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:13.528040 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:13.528118 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:13.528214 1483118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:13.528359 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:13.536826 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:13.561011 1483118 start.go:303] post-start completed in 130.840608ms
	I1225 13:26:13.561046 1483118 fix.go:56] fixHost completed within 23.181891118s
	I1225 13:26:13.561078 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.563717 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564040 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.564087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564268 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.564504 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564702 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564812 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.564965 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.565326 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.565340 1483118 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:13.687155 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510773.671808211
	
	I1225 13:26:13.687181 1483118 fix.go:206] guest clock: 1703510773.671808211
	I1225 13:26:13.687189 1483118 fix.go:219] Guest: 2023-12-25 13:26:13.671808211 +0000 UTC Remote: 2023-12-25 13:26:13.561052282 +0000 UTC m=+248.574935292 (delta=110.755929ms)
	I1225 13:26:13.687209 1483118 fix.go:190] guest clock delta is within tolerance: 110.755929ms
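The "guest clock" lines compare the guest's epoch time (the logged command is `date +%!s(MISSING).%!N(MISSING)`, i.e. date +%s.%N with the format verbs eaten by the logger) against the host clock, and accept the result when the delta stays within tolerance, here about 110ms. A sketch of that check, where the tolerance value and helper names are assumptions:

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // checkGuestClock reads the guest's epoch time over SSH and verifies the
    // drift against the host clock stays within the given tolerance.
    func checkGuestClock(runSSH func(string) (string, error), tolerance time.Duration) error {
        out, err := runSSH(`date +%s.%N`)
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        if delta > tolerance {
            return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
        }
        return nil
    }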
	I1225 13:26:13.687214 1483118 start.go:83] releasing machines lock for "no-preload-330063", held for 23.308100249s
	I1225 13:26:13.687243 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.687561 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:13.690172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690572 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.690604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690780 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691362 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691534 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691615 1483118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:13.691670 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.691807 1483118 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:13.691842 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.694593 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694871 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694943 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.694967 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695202 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695293 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.695319 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695452 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695508 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695613 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.695725 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695813 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.695899 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.696068 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.812135 1483118 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:13.817944 1483118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:13.965641 1483118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:13.973263 1483118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:13.973433 1483118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:13.991077 1483118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:13.991112 1483118 start.go:475] detecting cgroup driver to use...
	I1225 13:26:13.991197 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:14.005649 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:14.018464 1483118 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:14.018540 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:14.031361 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:14.046011 1483118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:14.152826 1483118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:14.281488 1483118 docker.go:219] disabling docker service ...
	I1225 13:26:14.281577 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:14.297584 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:14.311896 1483118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:14.448141 1483118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:14.583111 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:14.599419 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:14.619831 1483118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:14.619909 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.631979 1483118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:14.632065 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.643119 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.655441 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.666525 1483118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:14.678080 1483118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:14.687889 1483118 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:14.687957 1483118 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:14.702290 1483118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
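The sysctl failure above is the expected path on a fresh guest: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that fallback, again with runSSH as an illustrative stand-in:

    package sketch

    // ensureNetfilter mirrors the fallback above: if the bridge netfilter sysctl
    // is not present, load br_netfilter, then enable IPv4 forwarding.
    func ensureNetfilter(runSSH func(string) (string, error)) error {
        if _, err := runSSH("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // A missing /proc/sys/net/bridge/* entry usually just means the
            // module is not loaded yet, so try loading it before giving up.
            if _, err := runSSH("sudo modprobe br_netfilter"); err != nil {
                return err
            }
        }
        _, err := runSSH(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
        return err
    }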
	I1225 13:26:14.712225 1483118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:14.836207 1483118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:15.019332 1483118 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:15.019424 1483118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:15.024755 1483118 start.go:543] Will wait 60s for crictl version
	I1225 13:26:15.024844 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.028652 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:15.074415 1483118 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:15.074550 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.128559 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.178477 1483118 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
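After restarting CRI-O, the start code waits up to 60s for the socket to reappear and then asks crictl for the runtime version, as logged above. A sketch of that probe, with paths and timeouts mirroring the log but the helper itself being illustrative:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForCRISocket polls for the CRI-O socket after a restart and then asks
    // crictl for the runtime version, echoing the "Will wait 60s ..." steps above.
    func waitForCRISocket(runSSH func(string) (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := runSSH("stat /var/run/crio/crio.sock"); err == nil {
                return runSSH("sudo /usr/bin/crictl version")
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for /var/run/crio/crio.sock")
    }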
	I1225 13:26:13.714488 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Start
	I1225 13:26:13.714708 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring networks are active...
	I1225 13:26:13.715513 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network default is active
	I1225 13:26:13.715868 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network mk-embed-certs-880612 is active
	I1225 13:26:13.716279 1483946 main.go:141] libmachine: (embed-certs-880612) Getting domain xml...
	I1225 13:26:13.716905 1483946 main.go:141] libmachine: (embed-certs-880612) Creating domain...
	I1225 13:26:15.049817 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting to get IP...
	I1225 13:26:15.051040 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.051641 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.051756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.051615 1484395 retry.go:31] will retry after 199.911042ms: waiting for machine to come up
	I1225 13:26:15.253158 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.260582 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.260620 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.260519 1484395 retry.go:31] will retry after 285.022636ms: waiting for machine to come up
	I1225 13:26:15.547290 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.547756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.547787 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.547692 1484395 retry.go:31] will retry after 327.637369ms: waiting for machine to come up
	I1225 13:26:15.877618 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.878119 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.878153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.878058 1484395 retry.go:31] will retry after 384.668489ms: waiting for machine to come up
	I1225 13:26:16.264592 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.265056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.265084 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.265005 1484395 retry.go:31] will retry after 468.984683ms: waiting for machine to come up
	I1225 13:26:15.180205 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:15.183372 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.183820 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:15.183862 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.184054 1483118 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:15.188935 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:15.202790 1483118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:26:15.202839 1483118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:15.245267 1483118 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:26:15.245297 1483118 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:26:15.245409 1483118 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.245430 1483118 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.245448 1483118 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.245467 1483118 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.245468 1483118 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1225 13:26:15.245534 1483118 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.245447 1483118 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.245404 1483118 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.247839 1483118 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.247850 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.247874 1483118 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.247911 1483118 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.247980 1483118 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1225 13:26:15.247984 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.248043 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.248281 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.404332 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.405729 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.407712 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1225 13:26:15.412419 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.413201 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.413349 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.453117 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.533541 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.536843 1483118 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1225 13:26:15.536896 1483118 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.536950 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.576965 1483118 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1225 13:26:15.577010 1483118 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.577078 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688643 1483118 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1225 13:26:15.688696 1483118 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.688710 1483118 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1225 13:26:15.688750 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688759 1483118 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.688765 1483118 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1225 13:26:15.688794 1483118 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.688813 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688835 1483118 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1225 13:26:15.688847 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688858 1483118 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.688869 1483118 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1225 13:26:15.688890 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688896 1483118 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.688910 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.688921 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688949 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.706288 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.779043 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1225 13:26:15.779170 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1225 13:26:15.779181 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.779297 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:15.779309 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.779274 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.779439 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.779507 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.864891 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.865017 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.884972 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885024 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885035 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1225 13:26:15.885045 1483118 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885091 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885094 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885109 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885146 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1225 13:26:15.885167 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1225 13:26:15.885229 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:15.885239 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1225 13:26:15.885273 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1225 13:26:15.898753 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1225 13:26:17.966777 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.08165399s)
	I1225 13:26:17.966822 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1225 13:26:17.966836 1483118 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.081714527s)
	I1225 13:26:17.966865 1483118 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.081735795s)
	I1225 13:26:17.966848 1483118 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:17.966894 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966874 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966936 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
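Because the preload check above found no preloaded images for v1.29.0-rc.2, cache_images falls back to loading each required image individually: it lists what the runtime already has, removes stale copies, transfers the cached tarballs if needed, and podman-loads them one at a time (coredns and etcd above, the kube-* images in the lines that follow). A rough sketch of that loop, with runSSH and tarballFor as illustrative placeholders:

    package sketch

    import (
        "fmt"
        "strings"
    )

    // loadCachedImages approximates the fallback above: check which images the
    // runtime already has, and podman-load any cached tarball that is missing.
    func loadCachedImages(runSSH func(string) (string, error), images []string, tarballFor func(string) string) error {
        have, err := runSSH("sudo crictl images --output json")
        if err != nil {
            return err
        }
        for _, img := range images {
            if strings.Contains(have, img) {
                continue // already present in the runtime, nothing to transfer
            }
            if _, err := runSSH("sudo podman load -i " + tarballFor(img)); err != nil {
                return fmt.Errorf("loading %s: %w", img, err)
            }
        }
        return nil
    }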
	I1225 13:26:16.736013 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.736519 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.736553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.736449 1484395 retry.go:31] will retry after 873.004128ms: waiting for machine to come up
	I1225 13:26:17.611675 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:17.612135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:17.612160 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:17.612085 1484395 retry.go:31] will retry after 1.093577821s: waiting for machine to come up
	I1225 13:26:18.707411 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:18.707936 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:18.707994 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:18.707904 1484395 retry.go:31] will retry after 1.364130049s: waiting for machine to come up
	I1225 13:26:20.074559 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:20.075102 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:20.075135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:20.075033 1484395 retry.go:31] will retry after 1.740290763s: waiting for machine to come up
	I1225 13:26:21.677915 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.710943608s)
	I1225 13:26:21.677958 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1225 13:26:21.677990 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:21.678050 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:23.630977 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.952875837s)
	I1225 13:26:23.631018 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1225 13:26:23.631051 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:23.631112 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:21.818166 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:21.818695 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:21.818728 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:21.818641 1484395 retry.go:31] will retry after 2.035498479s: waiting for machine to come up
	I1225 13:26:23.856368 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:23.857094 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:23.857120 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:23.856997 1484395 retry.go:31] will retry after 2.331127519s: waiting for machine to come up
	I1225 13:26:26.191862 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:26.192571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:26.192608 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:26.192513 1484395 retry.go:31] will retry after 3.191632717s: waiting for machine to come up
	I1225 13:26:26.193816 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.56267278s)
	I1225 13:26:26.193849 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1225 13:26:26.193884 1483118 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:26.193951 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:27.342879 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.148892619s)
	I1225 13:26:27.342913 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 13:26:27.342948 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:27.343014 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:29.909035 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.565991605s)
	I1225 13:26:29.909080 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1225 13:26:29.909105 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.909159 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.386007 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:29.386335 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:29.386366 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:29.386294 1484395 retry.go:31] will retry after 3.786228584s: waiting for machine to come up
	I1225 13:26:34.439583 1484104 start.go:369] acquired machines lock for "default-k8s-diff-port-344803" in 1m24.461830001s
	I1225 13:26:34.439666 1484104 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:34.439686 1484104 fix.go:54] fixHost starting: 
	I1225 13:26:34.440164 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:34.440230 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:34.457403 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I1225 13:26:34.457867 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:34.458351 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:26:34.458422 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:34.458748 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:34.458989 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:34.459176 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:26:34.460975 1484104 fix.go:102] recreateIfNeeded on default-k8s-diff-port-344803: state=Stopped err=<nil>
	I1225 13:26:34.461008 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	W1225 13:26:34.461188 1484104 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:34.463715 1484104 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-344803" ...
	I1225 13:26:34.465022 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Start
	I1225 13:26:34.465274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring networks are active...
	I1225 13:26:34.466182 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network default is active
	I1225 13:26:34.466565 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network mk-default-k8s-diff-port-344803 is active
	I1225 13:26:34.466922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Getting domain xml...
	I1225 13:26:34.467691 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Creating domain...
	I1225 13:26:32.065345 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.15614946s)
	I1225 13:26:32.065380 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1225 13:26:32.065414 1483118 cache_images.go:123] Successfully loaded all cached images
	I1225 13:26:32.065421 1483118 cache_images.go:92] LoadImages completed in 16.820112197s
	I1225 13:26:32.065498 1483118 ssh_runner.go:195] Run: crio config
	I1225 13:26:32.120989 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:32.121019 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:32.121045 1483118 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:32.121063 1483118 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-330063 NodeName:no-preload-330063 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:32.121216 1483118 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-330063"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:32.121297 1483118 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-330063 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:32.121357 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:26:32.132569 1483118 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:32.132677 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:32.142052 1483118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1225 13:26:32.158590 1483118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:26:32.174761 1483118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1225 13:26:32.191518 1483118 ssh_runner.go:195] Run: grep 192.168.72.232	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:32.195353 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:32.206845 1483118 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063 for IP: 192.168.72.232
	I1225 13:26:32.206879 1483118 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:32.207098 1483118 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:32.207145 1483118 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:32.207212 1483118 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.key
	I1225 13:26:32.207270 1483118 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key.4e9d87c6
	I1225 13:26:32.207323 1483118 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key
	I1225 13:26:32.207437 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:32.207465 1483118 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:32.207475 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:32.207513 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:32.207539 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:32.207565 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:32.207607 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:32.208427 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:32.231142 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:32.253335 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:32.275165 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:32.297762 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:32.320671 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:32.344125 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:32.368066 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:32.390688 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:32.412849 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:32.435445 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:32.457687 1483118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:32.474494 1483118 ssh_runner.go:195] Run: openssl version
	I1225 13:26:32.480146 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:32.491141 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495831 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495902 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.501393 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:32.511643 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:32.521843 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526421 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526514 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.531988 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:32.542920 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:32.553604 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558381 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558478 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.563913 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:32.574591 1483118 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:32.579046 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:32.584821 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:32.590781 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:32.596456 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:32.601978 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:32.607981 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:32.613785 1483118 kubeadm.go:404] StartCluster: {Name:no-preload-330063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:32.613897 1483118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:32.613955 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:32.651782 1483118 cri.go:89] found id: ""
	I1225 13:26:32.651858 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:32.664617 1483118 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:32.664648 1483118 kubeadm.go:636] restartCluster start
	I1225 13:26:32.664710 1483118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:32.674727 1483118 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:32.676090 1483118 kubeconfig.go:92] found "no-preload-330063" server: "https://192.168.72.232:8443"
	I1225 13:26:32.679085 1483118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:32.689716 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:32.689824 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:32.702305 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.189843 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.189955 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.202514 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.689935 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.690048 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.703975 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.190601 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.190722 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.203987 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.690505 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.690639 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.701704 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.173890 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174349 1483946 main.go:141] libmachine: (embed-certs-880612) Found IP for machine: 192.168.50.179
	I1225 13:26:33.174372 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has current primary IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174405 1483946 main.go:141] libmachine: (embed-certs-880612) Reserving static IP address...
	I1225 13:26:33.174805 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.174845 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | skip adding static IP to network mk-embed-certs-880612 - found existing host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"}
	I1225 13:26:33.174860 1483946 main.go:141] libmachine: (embed-certs-880612) Reserved static IP address: 192.168.50.179
	I1225 13:26:33.174877 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting for SSH to be available...
	I1225 13:26:33.174892 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Getting to WaitForSSH function...
	I1225 13:26:33.177207 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177579 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.177609 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH client type: external
	I1225 13:26:33.177737 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa (-rw-------)
	I1225 13:26:33.177777 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:33.177790 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | About to run SSH command:
	I1225 13:26:33.177803 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | exit 0
	I1225 13:26:33.274328 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:33.274736 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetConfigRaw
	I1225 13:26:33.275462 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.278056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278429 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.278483 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278725 1483946 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:26:33.278982 1483946 machine.go:88] provisioning docker machine ...
	I1225 13:26:33.279013 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:33.279236 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279448 1483946 buildroot.go:166] provisioning hostname "embed-certs-880612"
	I1225 13:26:33.279468 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279619 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.281930 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282277 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.282311 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282474 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.282704 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.282885 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.283033 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.283194 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.283700 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.283723 1483946 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880612 && echo "embed-certs-880612" | sudo tee /etc/hostname
	I1225 13:26:33.433456 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880612
	
	I1225 13:26:33.433483 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.436392 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.436794 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.436840 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.437004 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.437233 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437446 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437595 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.437783 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.438112 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.438134 1483946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880612/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:33.579776 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:33.579813 1483946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:33.579845 1483946 buildroot.go:174] setting up certificates
	I1225 13:26:33.579859 1483946 provision.go:83] configureAuth start
	I1225 13:26:33.579874 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.580151 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.582843 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583233 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.583266 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583461 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.585844 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586216 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.586253 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586454 1483946 provision.go:138] copyHostCerts
	I1225 13:26:33.586532 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:33.586548 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:33.586604 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:33.586692 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:33.586704 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:33.586723 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:33.586771 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:33.586778 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:33.586797 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:33.586837 1483946 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880612 san=[192.168.50.179 192.168.50.179 localhost 127.0.0.1 minikube embed-certs-880612]
	I1225 13:26:33.640840 1483946 provision.go:172] copyRemoteCerts
	I1225 13:26:33.640921 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:33.640951 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.643970 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644390 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.644419 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644606 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.644877 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.645065 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.645204 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:33.744907 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:33.769061 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 13:26:33.792125 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:33.816115 1483946 provision.go:86] duration metric: configureAuth took 236.215977ms
	I1225 13:26:33.816159 1483946 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:33.816373 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:33.816497 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.819654 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820075 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.820108 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820287 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.820519 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820738 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820873 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.821068 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.821403 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.821428 1483946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:34.159844 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:34.159882 1483946 machine.go:91] provisioned docker machine in 880.882549ms
	I1225 13:26:34.159897 1483946 start.go:300] post-start starting for "embed-certs-880612" (driver="kvm2")
	I1225 13:26:34.159934 1483946 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:34.159964 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.160327 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:34.160358 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.163009 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163367 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.163400 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163600 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.163801 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.163943 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.164093 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.261072 1483946 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:34.265655 1483946 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:34.265686 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:34.265777 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:34.265881 1483946 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:34.265996 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:34.276013 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:34.299731 1483946 start.go:303] post-start completed in 139.812994ms
	I1225 13:26:34.299783 1483946 fix.go:56] fixHost completed within 20.612345515s
	I1225 13:26:34.299813 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.302711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303189 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.303229 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303363 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.303617 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.303856 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.304000 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.304198 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:34.304522 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:34.304535 1483946 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:34.439399 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510794.384723199
	
	I1225 13:26:34.439426 1483946 fix.go:206] guest clock: 1703510794.384723199
	I1225 13:26:34.439433 1483946 fix.go:219] Guest: 2023-12-25 13:26:34.384723199 +0000 UTC Remote: 2023-12-25 13:26:34.29978875 +0000 UTC m=+107.780041384 (delta=84.934449ms)
	I1225 13:26:34.439468 1483946 fix.go:190] guest clock delta is within tolerance: 84.934449ms
	I1225 13:26:34.439475 1483946 start.go:83] releasing machines lock for "embed-certs-880612", held for 20.75208465s
	I1225 13:26:34.439518 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.439832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:34.442677 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443002 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.443031 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.443827 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444029 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444168 1483946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:34.444225 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.444259 1483946 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:34.444295 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.447106 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447136 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447497 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447533 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447677 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447719 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447860 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447904 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447982 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448094 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448170 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.448219 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.572590 1483946 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:34.578648 1483946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:34.723874 1483946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:34.731423 1483946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:34.731495 1483946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:34.752447 1483946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
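
The step above avoids CNI conflicts by renaming any bridge or podman configs under /etc/cni/net.d with a .mk_disabled suffix before the runtime is configured. A standalone sketch of the same idea in Go (an illustration rather than minikube's own code; the directory and suffix are the ones shown in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI config files in dir by
// appending suffix, mirroring the `find ... -exec mv {} {}.mk_disabled` run above.
func disableConflictingCNIConfigs(dir, suffix string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, suffix) {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+suffix); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d", ".mk_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}

Like the find/mv pair it mirrors, this needs root to rename files under /etc/cni/net.d.
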
	I1225 13:26:34.752478 1483946 start.go:475] detecting cgroup driver to use...
	I1225 13:26:34.752539 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:34.766782 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:34.781457 1483946 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:34.781548 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:34.798097 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:34.813743 1483946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:34.936843 1483946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:35.053397 1483946 docker.go:219] disabling docker service ...
	I1225 13:26:35.053478 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:35.067702 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:35.079670 1483946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:35.213241 1483946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:35.346105 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:35.359207 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:35.377259 1483946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:35.377347 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.388026 1483946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:35.388116 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.398180 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.411736 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.425888 1483946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:35.436586 1483946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:35.446969 1483946 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:35.447028 1483946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:35.461401 1483946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:35.471896 1483946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:35.619404 1483946 ssh_runner.go:195] Run: sudo systemctl restart crio
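
Taken together, the runtime setup above points crictl at the CRI-O socket via /etc/crictl.yaml, sets the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager in the 02-crio.conf drop-in, loads br_netfilter, enables IP forwarding, and restarts crio. A simplified sketch that writes those two files outright instead of sed-editing them in place (the TOML section headers are the conventional CRI-O ones and are an assumption here; the log only shows the individual keys being edited):

package main

import (
	"fmt"
	"os"
)

// Sketch only: writes the two fragments the log above produces with tee and sed.
// File paths, key names, and values come from the log; the TOML section headers
// are the conventional CRI-O ones and are assumed, not shown in the log.
func main() {
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	crioDropIn := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /etc/crictl.yaml and 02-crio.conf; restart crio to apply")
}

As in the log, crio still has to be restarted for the drop-in to take effect.
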
	I1225 13:26:35.825331 1483946 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:35.825410 1483946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:35.830699 1483946 start.go:543] Will wait 60s for crictl version
	I1225 13:26:35.830779 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:26:35.834938 1483946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:35.874595 1483946 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:35.874717 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.924227 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.982707 1483946 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:35.984401 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:35.987241 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987669 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:35.987708 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987991 1483946 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:35.992383 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:36.004918 1483946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:36.005000 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:36.053783 1483946 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:36.053887 1483946 ssh_runner.go:195] Run: which lz4
	I1225 13:26:36.058040 1483946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:26:36.062730 1483946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:36.062785 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
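
The preload logic here is: crictl images does not show the expected kube-apiserver image and /preloaded.tar.lz4 does not yet exist on the VM, so the ~458 MB cri-o preload tarball is copied over and, a few lines further down, unpacked with tar -I lz4 -C /var -xf. A rough standalone version of that extract-only-if-needed step, assuming tar and lz4 are on PATH (the marker path stands in for the real crictl-based check and is purely illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreloadIfMissing unpacks tarball into destDir unless marker already
// exists. marker stands in for the "are the images already present?" check the
// log performs with crictl; the tar flags mirror the `tar -I lz4 -C /var -xf`
// call seen below in the log.
func extractPreloadIfMissing(tarball, destDir, marker string) error {
	if _, err := os.Stat(marker); err == nil {
		fmt.Println("preload already extracted, skipping")
		return nil
	}
	cmd := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical marker path for illustration; the tarball and destination
	// match the paths in the log above.
	if err := extractPreloadIfMissing("/preloaded.tar.lz4", "/var", "/var/lib/containers/preload-done"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
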
	I1225 13:26:35.824151 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting to get IP...
	I1225 13:26:35.825061 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825643 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:35.825605 1484550 retry.go:31] will retry after 292.143168ms: waiting for machine to come up
	I1225 13:26:36.119220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119787 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.119666 1484550 retry.go:31] will retry after 250.340048ms: waiting for machine to come up
	I1225 13:26:36.372343 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372894 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372932 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.372840 1484550 retry.go:31] will retry after 434.335692ms: waiting for machine to come up
	I1225 13:26:36.808477 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809037 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809070 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.808999 1484550 retry.go:31] will retry after 455.184367ms: waiting for machine to come up
	I1225 13:26:37.265791 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266330 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266364 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.266278 1484550 retry.go:31] will retry after 487.994897ms: waiting for machine to come up
	I1225 13:26:37.756220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756745 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756774 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.756699 1484550 retry.go:31] will retry after 817.108831ms: waiting for machine to come up
	I1225 13:26:38.575846 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576271 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576301 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:38.576222 1484550 retry.go:31] will retry after 1.022104679s: waiting for machine to come up
	I1225 13:26:39.600386 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600901 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:39.600796 1484550 retry.go:31] will retry after 1.318332419s: waiting for machine to come up
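
Interleaved with the embed-certs work, default-k8s-diff-port-344803 is still waiting for a DHCP lease; the retry intervals above grow from roughly 250ms toward multiple seconds. A generic wait loop in the same spirit, doubling the delay and adding jitter on each attempt (a sketch, not minikube's retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn until it succeeds or attempts run out, roughly doubling
// the delay each time and adding jitter, like the intervals in the log above.
func waitFor(fn func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(0)
		if half := int64(delay / 2); half > 0 {
			jitter = time.Duration(rand.Int63n(half))
		}
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("gave up waiting")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP address yet") // stands in for the DHCP lease lookup
		}
		return nil
	}, 10, 250*time.Millisecond)
	fmt.Println("finished after", attempts, "attempts, err =", err)
}
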
	I1225 13:26:35.190721 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.190828 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.203971 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:35.689934 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.690032 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.701978 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.190256 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.190355 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.204476 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.689969 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.690062 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.706632 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.189808 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.189921 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.203895 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.690391 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.690499 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.704914 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.190575 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.190694 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.208546 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.690090 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.690260 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.701827 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.190421 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.190549 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.202377 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.689978 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.690104 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.706511 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
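
Each of these "Checking apiserver status" probes is the same operation: run sudo pgrep -xnf kube-apiserver.*minikube.* over SSH about every 500ms and treat exit status 1 (no matching process) as "not started yet". Stripped of the SSH plumbing, the probe loop looks roughly like this (illustrative only; sudo is dropped and os/exec stands in for the remote runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube pattern exists, using the same pgrep invocation seen in the log.
// pgrep exits 0 when at least one process matches and 1 when none do.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver process never appeared")
}
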
	I1225 13:26:37.963805 1483946 crio.go:444] Took 1.905809 seconds to copy over tarball
	I1225 13:26:37.963892 1483946 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:40.988182 1483946 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024256156s)
	I1225 13:26:40.988214 1483946 crio.go:451] Took 3.024377 seconds to extract the tarball
	I1225 13:26:40.988225 1483946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:26:41.030256 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:41.085117 1483946 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:26:41.085147 1483946 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:26:41.085236 1483946 ssh_runner.go:195] Run: crio config
	I1225 13:26:41.149962 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:26:41.149993 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:41.150020 1483946 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:41.150044 1483946 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880612 NodeName:embed-certs-880612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:41.150237 1483946 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880612"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:41.150312 1483946 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-880612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:41.150367 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:26:41.160557 1483946 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:41.160681 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:41.170564 1483946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1225 13:26:41.187315 1483946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:26:41.204638 1483946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1225 13:26:41.222789 1483946 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:41.226604 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:41.238315 1483946 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612 for IP: 192.168.50.179
	I1225 13:26:41.238363 1483946 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:41.238614 1483946 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:41.238665 1483946 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:41.238768 1483946 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/client.key
	I1225 13:26:41.238860 1483946 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key.518daada
	I1225 13:26:41.238925 1483946 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key
	I1225 13:26:41.239060 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:41.239098 1483946 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:41.239122 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:41.239167 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:41.239204 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:41.239245 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:41.239300 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:41.240235 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:41.265422 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:41.290398 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:41.315296 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:41.339984 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:41.363071 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:41.392035 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:41.419673 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:41.444242 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:41.468314 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:41.493811 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:41.518255 1483946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:41.535605 1483946 ssh_runner.go:195] Run: openssl version
	I1225 13:26:41.541254 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:41.551784 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556610 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556686 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.562299 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:41.572173 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
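
The certificate handling above reduces to: copy each CA into /usr/share/ca-certificates, compute its OpenSSL subject hash with openssl x509 -hash -noout, and link /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients on the node trust it. A small sketch of that hash-and-link step, shelling out to openssl just as the log does (the paths in main are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links /etc/ssl/certs/<subject-hash>.0 to certPath so OpenSSL-based
// clients trust it, mirroring the `openssl x509 -hash` plus `ln -fs` pair above.
func installCA(certPath, sslCertsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(sslCertsDir, hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CA installed")
}
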
	I1225 13:26:40.921702 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922293 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922335 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:40.922225 1484550 retry.go:31] will retry after 1.835505717s: waiting for machine to come up
	I1225 13:26:42.760187 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760688 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760714 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:42.760625 1484550 retry.go:31] will retry after 1.646709972s: waiting for machine to come up
	I1225 13:26:44.409540 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410023 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410064 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:44.409998 1484550 retry.go:31] will retry after 1.922870398s: waiting for machine to come up
	I1225 13:26:40.190712 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.190831 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.205624 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:40.690729 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.690835 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.702671 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.190145 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.190234 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.201991 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.690585 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.690683 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.704041 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.190633 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.190745 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.202086 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.690049 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.690177 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.701556 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.701597 1483118 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:42.701611 1483118 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:42.701635 1483118 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:42.701719 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:42.745733 1483118 cri.go:89] found id: ""
	I1225 13:26:42.745835 1483118 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:42.761355 1483118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:42.773734 1483118 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:42.773812 1483118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785213 1483118 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:42.927378 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.715163 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.934803 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.024379 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.106069 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:44.106200 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:44.607243 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
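
Because none of the /etc/kubernetes/*.conf files survived the stop, the restart path a few lines above rebuilds the control plane piecewise with kubeadm init phase instead of a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, then etcd, all driven by the generated /var/tmp/minikube/kubeadm.yaml. Run directly on the node, that sequence amounts to the following (a sketch; the real invocations also prefix PATH with the versioned binaries directory, as the log shows):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The same phase order the log runs when reconfiguring the cluster.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
	fmt.Println("control plane reconfigured")
}
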
	I1225 13:26:41.582062 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692062 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692156 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.698498 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:41.709171 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:41.719597 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724562 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724628 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.730571 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:41.740854 1483946 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:41.745792 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:41.752228 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:41.758318 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:41.764486 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:41.770859 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:41.777155 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:26:41.783382 1483946 kubeadm.go:404] StartCluster: {Name:embed-certs-880612 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:41.783493 1483946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:41.783557 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:41.827659 1483946 cri.go:89] found id: ""
	I1225 13:26:41.827738 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:41.837713 1483946 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:41.837740 1483946 kubeadm.go:636] restartCluster start
	I1225 13:26:41.837788 1483946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:41.846668 1483946 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.847773 1483946 kubeconfig.go:92] found "embed-certs-880612" server: "https://192.168.50.179:8443"
	I1225 13:26:41.850105 1483946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:41.859124 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.859196 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.870194 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.359810 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.359906 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.371508 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.860078 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.860167 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.876302 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.359657 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.359761 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.376765 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.859950 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.860067 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.878778 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.359355 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.359439 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.371780 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.859294 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.859429 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.872286 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.359315 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.359438 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.375926 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.859453 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.859560 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.875608 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.360180 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.360335 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.376143 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.335832 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336405 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336439 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:46.336342 1484550 retry.go:31] will retry after 2.75487061s: waiting for machine to come up
	I1225 13:26:49.092529 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092962 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092986 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:49.092926 1484550 retry.go:31] will retry after 4.456958281s: waiting for machine to come up
	I1225 13:26:45.106806 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:45.607205 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.106726 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.606675 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.628821 1483118 api_server.go:72] duration metric: took 2.522750929s to wait for apiserver process to appear ...
	I1225 13:26:46.628852 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:46.628878 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.629487 1483118 api_server.go:269] stopped: https://192.168.72.232:8443/healthz: Get "https://192.168.72.232:8443/healthz": dial tcp 192.168.72.232:8443: connect: connection refused
	I1225 13:26:47.129325 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
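
With a kube-apiserver process now present, the wait switches from pgrep to polling https://192.168.72.232:8443/healthz roughly every 500ms: a refused connection, the 403 for the anonymous user, and the 500 responses with failed poststarthooks below all count as "not ready yet". A minimal poller in the same spirit (TLS verification is skipped only because this sketch does not load the cluster CA, which the real check uses):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 or attempts run out. Anything
// else (refused connection, 403, 500 with failed poststarthooks) is treated
// as "not ready yet", matching the behaviour in the log above.
func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.72.232:8443/healthz", 20); err != nil {
		fmt.Println(err)
	}
}
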
	I1225 13:26:46.860130 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.860255 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.875574 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.360120 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.360254 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.375470 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.860119 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.860205 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.875015 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.359513 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.359649 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.374270 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.859330 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.859424 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.871789 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.359307 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.359403 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.371339 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.859669 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.859766 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.872882 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.359345 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.359455 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.370602 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.859148 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.859271 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.871042 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.359459 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.359544 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.371252 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.824734 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:26:50.824772 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:26:50.824789 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:50.996870 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:50.996923 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.129079 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.134132 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.134169 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.629263 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.635273 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.635305 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:52.129955 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:52.135538 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:26:52.144432 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:26:52.144470 1483118 api_server.go:131] duration metric: took 5.515610636s to wait for apiserver health ...
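	The entries above show minikube polling the apiserver's /healthz endpoint roughly every 500ms, treating HTTP 500 responses (individual poststarthooks still reported as failed) as "not ready yet" and stopping once a 200 "ok" comes back. Below is a minimal sketch of that loop; it is not minikube's actual api_server.go code, the waitForHealthz helper name is hypothetical, and it skips TLS verification purely for illustration (a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Sketch only: InsecureSkipVerify is used here to keep the example self-contained.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: "ok"
			}
			// 500 with "[-]poststarthook/... failed" lines: keep retrying.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.232:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}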
	I1225 13:26:52.144483 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:52.144491 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:52.146289 1483118 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:26:52.147684 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:26:52.187156 1483118 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:26:52.210022 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:26:52.225137 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:26:52.225190 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:26:52.225200 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:26:52.225218 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:26:52.225230 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:26:52.225239 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:26:52.225248 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:26:52.225262 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:26:52.225272 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:26:52.225288 1483118 system_pods.go:74] duration metric: took 15.241676ms to wait for pod list to return data ...
	I1225 13:26:52.225300 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:26:52.229429 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:26:52.229471 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:26:52.229527 1483118 node_conditions.go:105] duration metric: took 4.217644ms to run NodePressure ...
	I1225 13:26:52.229549 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.630596 1483118 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635810 1483118 kubeadm.go:787] kubelet initialised
	I1225 13:26:52.635835 1483118 kubeadm.go:788] duration metric: took 5.192822ms waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635844 1483118 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:52.645095 1483118 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.652146 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652181 1483118 pod_ready.go:81] duration metric: took 7.040805ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.652194 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652203 1483118 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.658310 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658347 1483118 pod_ready.go:81] duration metric: took 6.126503ms waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.658359 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658369 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.663826 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663871 1483118 pod_ready.go:81] duration metric: took 5.492644ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.663884 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663893 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.669098 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669137 1483118 pod_ready.go:81] duration metric: took 5.230934ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.669148 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669157 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.035736 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035782 1483118 pod_ready.go:81] duration metric: took 366.614624ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.035796 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035806 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.435089 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435123 1483118 pod_ready.go:81] duration metric: took 399.30822ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.435135 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435145 1483118 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.835248 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835280 1483118 pod_ready.go:81] duration metric: took 400.124904ms waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.835290 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835299 1483118 pod_ready.go:38] duration metric: took 1.199443126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:53.835317 1483118 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:26:53.848912 1483118 ops.go:34] apiserver oom_adj: -16
	I1225 13:26:53.848954 1483118 kubeadm.go:640] restartCluster took 21.184297233s
	I1225 13:26:53.848965 1483118 kubeadm.go:406] StartCluster complete in 21.235197323s
	I1225 13:26:53.849001 1483118 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.849140 1483118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:26:53.851909 1483118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.852278 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:26:53.852353 1483118 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:26:53.852461 1483118 addons.go:69] Setting storage-provisioner=true in profile "no-preload-330063"
	I1225 13:26:53.852495 1483118 addons.go:237] Setting addon storage-provisioner=true in "no-preload-330063"
	W1225 13:26:53.852507 1483118 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:26:53.852514 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:53.852555 1483118 addons.go:69] Setting default-storageclass=true in profile "no-preload-330063"
	I1225 13:26:53.852579 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852607 1483118 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-330063"
	I1225 13:26:53.852871 1483118 addons.go:69] Setting metrics-server=true in profile "no-preload-330063"
	I1225 13:26:53.852895 1483118 addons.go:237] Setting addon metrics-server=true in "no-preload-330063"
	W1225 13:26:53.852904 1483118 addons.go:246] addon metrics-server should already be in state true
	I1225 13:26:53.852948 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853315 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853361 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.858023 1483118 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-330063" context rescaled to 1 replicas
	I1225 13:26:53.858077 1483118 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:26:53.861368 1483118 out.go:177] * Verifying Kubernetes components...
	I1225 13:26:53.862819 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:26:53.870209 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1225 13:26:53.870486 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I1225 13:26:53.870693 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.870807 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.871066 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I1225 13:26:53.871329 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871341 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871426 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871433 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871742 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.871770 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.872271 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872308 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.872511 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.872896 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872923 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.873167 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.873180 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.873549 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.873693 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.878043 1483118 addons.go:237] Setting addon default-storageclass=true in "no-preload-330063"
	W1225 13:26:53.878077 1483118 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:26:53.878117 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.878613 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.878657 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.891971 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I1225 13:26:53.892418 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.893067 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.893092 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.893461 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.893634 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.895563 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.897916 1483118 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:26:53.896007 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I1225 13:26:53.899799 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:26:53.899823 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:26:53.899858 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.900294 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.900987 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.901006 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.901451 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I1225 13:26:53.902344 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.902956 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.902981 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.903419 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.903917 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.903986 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.904022 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.904445 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.904452 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.904471 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.904615 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.904785 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.906582 1483118 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:53.551972 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552449 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Found IP for machine: 192.168.61.39
	I1225 13:26:53.552500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has current primary IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552515 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserving static IP address...
	I1225 13:26:53.552918 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.552967 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | skip adding static IP to network mk-default-k8s-diff-port-344803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"}
	I1225 13:26:53.552990 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserved static IP address: 192.168.61.39
	I1225 13:26:53.553003 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for SSH to be available...
	I1225 13:26:53.553041 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Getting to WaitForSSH function...
	I1225 13:26:53.555272 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555619 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.555654 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555753 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH client type: external
	I1225 13:26:53.555785 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa (-rw-------)
	I1225 13:26:53.555828 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:53.555852 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | About to run SSH command:
	I1225 13:26:53.555872 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | exit 0
	I1225 13:26:53.642574 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:53.643094 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetConfigRaw
	I1225 13:26:53.643946 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.646842 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647308 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.647351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647580 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:26:53.647806 1484104 machine.go:88] provisioning docker machine ...
	I1225 13:26:53.647827 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:53.648054 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648255 1484104 buildroot.go:166] provisioning hostname "default-k8s-diff-port-344803"
	I1225 13:26:53.648274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648485 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.650935 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651291 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.651327 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651479 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.651718 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.651887 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.652028 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.652213 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.652587 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.652605 1484104 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-344803 && echo "default-k8s-diff-port-344803" | sudo tee /etc/hostname
	I1225 13:26:53.782284 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-344803
	
	I1225 13:26:53.782315 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.785326 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785631 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.785668 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785876 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.786149 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786374 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786600 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.786806 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.787202 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.787222 1484104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-344803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-344803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-344803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:53.916809 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:53.916844 1484104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:53.916870 1484104 buildroot.go:174] setting up certificates
	I1225 13:26:53.916882 1484104 provision.go:83] configureAuth start
	I1225 13:26:53.916900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.917233 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.920048 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920377 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.920402 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920538 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.923177 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923404 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.923437 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923584 1484104 provision.go:138] copyHostCerts
	I1225 13:26:53.923666 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:53.923686 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:53.923763 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:53.923934 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:53.923947 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:53.923978 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:53.924078 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:53.924088 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:53.924115 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:53.924207 1484104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-344803 san=[192.168.61.39 192.168.61.39 localhost 127.0.0.1 minikube default-k8s-diff-port-344803]
	I1225 13:26:54.014673 1484104 provision.go:172] copyRemoteCerts
	I1225 13:26:54.014739 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:54.014772 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.018361 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.018849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.018924 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.019089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.019351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.019559 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.019949 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.120711 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:54.155907 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1225 13:26:54.192829 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:26:54.227819 1484104 provision.go:86] duration metric: configureAuth took 310.912829ms
	I1225 13:26:54.227853 1484104 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:54.228119 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:54.228236 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.232535 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232580 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.232628 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232945 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.233215 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233422 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233608 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.233801 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.234295 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.234322 1484104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:54.638656 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:54.638772 1484104 machine.go:91] provisioned docker machine in 990.950916ms
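	The "%!s(MISSING)" marker in the sysconfig command above (and "%!N(MISSING)" in the date command later) is how Go's fmt package renders a format verb that has no matching operand, which suggests the remote command string was re-run through a printf-style formatter before being logged; the command executed on the VM most likely contained the literal %s expected by printf. A short sketch of that fmt behavior:

package main

import "fmt"

func main() {
	// A %s verb with no corresponding argument is rendered as "%!s(MISSING)"
	// rather than raising an error, matching the marker seen in the log above.
	fmt.Println(fmt.Sprintf("printf %s \"CRIO_MINIKUBE_OPTIONS=...\""))
	// Output: printf %!s(MISSING) "CRIO_MINIKUBE_OPTIONS=..."
}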
	I1225 13:26:54.638798 1484104 start.go:300] post-start starting for "default-k8s-diff-port-344803" (driver="kvm2")
	I1225 13:26:54.638821 1484104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:54.638883 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.639341 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:54.639383 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.643369 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.643810 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.643863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.644140 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.644444 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.644624 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.644774 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.740189 1484104 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:54.745972 1484104 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:54.746009 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:54.746104 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:54.746229 1484104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:54.746362 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:54.758199 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:54.794013 1484104 start.go:303] post-start completed in 155.186268ms
	I1225 13:26:54.794048 1484104 fix.go:56] fixHost completed within 20.354368879s
	I1225 13:26:54.794077 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.797620 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798092 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.798129 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798423 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.798692 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.798900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.799067 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.799293 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.799807 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.799829 1484104 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:54.933026 1482618 start.go:369] acquired machines lock for "old-k8s-version-198979" in 59.553202424s
	I1225 13:26:54.933097 1482618 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:54.933105 1482618 fix.go:54] fixHost starting: 
	I1225 13:26:54.933577 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:54.933620 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:54.956206 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I1225 13:26:54.956801 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:54.958396 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:26:54.958425 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:54.958887 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:54.959164 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:26:54.959384 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:26:54.961270 1482618 fix.go:102] recreateIfNeeded on old-k8s-version-198979: state=Stopped err=<nil>
	I1225 13:26:54.961305 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	W1225 13:26:54.961494 1482618 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:54.963775 1482618 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-198979" ...
	I1225 13:26:53.904908 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.908114 1483118 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:53.908130 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:26:53.908147 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.908370 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.912254 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.912861 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.912885 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.913096 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.913324 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.913510 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.913629 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.959638 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I1225 13:26:53.960190 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.960890 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.960913 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.961320 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.961603 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.963927 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.964240 1483118 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:53.964262 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:26:53.964282 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.967614 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968092 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.968155 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968471 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.968679 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.968879 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.969040 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:54.064639 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:26:54.064674 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:26:54.093609 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:54.147415 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:26:54.147449 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:26:54.148976 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:54.160381 1483118 node_ready.go:35] waiting up to 6m0s for node "no-preload-330063" to be "Ready" ...
	I1225 13:26:54.160490 1483118 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:26:54.202209 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.202242 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:26:54.276251 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.965270 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Start
	I1225 13:26:54.965680 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring networks are active...
	I1225 13:26:54.966477 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network default is active
	I1225 13:26:54.966919 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network mk-old-k8s-version-198979 is active
	I1225 13:26:54.967420 1482618 main.go:141] libmachine: (old-k8s-version-198979) Getting domain xml...
	I1225 13:26:54.968585 1482618 main.go:141] libmachine: (old-k8s-version-198979) Creating domain...
	I1225 13:26:55.590940 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.497275379s)
	I1225 13:26:55.591005 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591020 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591108 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442107411s)
	I1225 13:26:55.591127 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591136 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591247 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.314957717s)
	I1225 13:26:55.591268 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.595765 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.595838 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.595847 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.595859 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.595867 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596016 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596049 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596058 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596067 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596075 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596177 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596218 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596226 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596236 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596244 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596485 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596515 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596929 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596972 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596979 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596990 1483118 addons.go:473] Verifying addon metrics-server=true in "no-preload-330063"
	I1225 13:26:55.597032 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.597067 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.597076 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.610755 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.610788 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.611238 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.611264 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.613767 1483118 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1225 13:26:51.859989 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.860081 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.871647 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.871684 1483946 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:51.871709 1483946 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:51.871725 1483946 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:51.871817 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:51.919587 1483946 cri.go:89] found id: ""
	I1225 13:26:51.919706 1483946 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:51.935341 1483946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:51.944522 1483946 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:51.944588 1483946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954607 1483946 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954637 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.092831 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.921485 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.161902 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.270786 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.340226 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:53.340331 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:53.841309 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.341486 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.841104 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.341409 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.841238 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.867371 1483946 api_server.go:72] duration metric: took 2.52714535s to wait for apiserver process to appear ...
	I1225 13:26:55.867406 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:55.867434 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:55.867970 1483946 api_server.go:269] stopped: https://192.168.50.179:8443/healthz: Get "https://192.168.50.179:8443/healthz": dial tcp 192.168.50.179:8443: connect: connection refused
	I1225 13:26:56.368335 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:54.932810 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510814.876127642
	
	I1225 13:26:54.932838 1484104 fix.go:206] guest clock: 1703510814.876127642
	I1225 13:26:54.932848 1484104 fix.go:219] Guest: 2023-12-25 13:26:54.876127642 +0000 UTC Remote: 2023-12-25 13:26:54.794053361 +0000 UTC m=+104.977714576 (delta=82.074281ms)
	I1225 13:26:54.932878 1484104 fix.go:190] guest clock delta is within tolerance: 82.074281ms
	I1225 13:26:54.932885 1484104 start.go:83] releasing machines lock for "default-k8s-diff-port-344803", held for 20.493256775s
	I1225 13:26:54.932920 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.933380 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:54.936626 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.937262 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937534 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938366 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938583 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938676 1484104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:54.938722 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.938826 1484104 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:54.938854 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.942392 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.942792 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.942831 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.943292 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.943487 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.943635 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.943764 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.943922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.944870 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.945020 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.945066 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.945318 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.945498 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.945743 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:55.069674 1484104 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:55.078333 1484104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:55.247706 1484104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:55.256782 1484104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:55.256885 1484104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:55.278269 1484104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:55.278303 1484104 start.go:475] detecting cgroup driver to use...
	I1225 13:26:55.278383 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:55.302307 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:55.322161 1484104 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:55.322345 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:55.342241 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:55.361128 1484104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:55.547880 1484104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:55.693711 1484104 docker.go:219] disabling docker service ...
	I1225 13:26:55.693804 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:55.708058 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:55.721136 1484104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:55.890044 1484104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:56.042549 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:56.061359 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:56.086075 1484104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:56.086169 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.100059 1484104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:56.100162 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.113858 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.127589 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.140964 1484104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:56.155180 1484104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:56.167585 1484104 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:56.167716 1484104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:56.186467 1484104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:56.200044 1484104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:56.339507 1484104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:56.563294 1484104 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:56.563385 1484104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:56.570381 1484104 start.go:543] Will wait 60s for crictl version
	I1225 13:26:56.570477 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:26:56.575675 1484104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:56.617219 1484104 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:56.617322 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.679138 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.751125 1484104 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:56.752677 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:56.756612 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757108 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:56.757142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757502 1484104 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:56.763739 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:56.781952 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:56.782029 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:56.840852 1484104 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:56.840939 1484104 ssh_runner.go:195] Run: which lz4
	I1225 13:26:56.845412 1484104 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:56.850135 1484104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:56.850181 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:58.731034 1484104 crio.go:444] Took 1.885656 seconds to copy over tarball
	I1225 13:26:58.731138 1484104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:55.615056 1483118 addons.go:508] enable addons completed in 1.762702944s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1225 13:26:56.169115 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:58.665700 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:56.860066 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting to get IP...
	I1225 13:26:56.860987 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:56.861644 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:56.861765 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:56.861626 1484760 retry.go:31] will retry after 198.102922ms: waiting for machine to come up
	I1225 13:26:57.061281 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.062001 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.062029 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.061907 1484760 retry.go:31] will retry after 299.469436ms: waiting for machine to come up
	I1225 13:26:57.362874 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.363385 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.363441 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.363363 1484760 retry.go:31] will retry after 460.796393ms: waiting for machine to come up
	I1225 13:26:57.826330 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.827065 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.827098 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.827021 1484760 retry.go:31] will retry after 397.690798ms: waiting for machine to come up
	I1225 13:26:58.226942 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.227490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.227528 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.227437 1484760 retry.go:31] will retry after 731.798943ms: waiting for machine to come up
	I1225 13:26:58.960490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.960969 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.961000 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.960930 1484760 retry.go:31] will retry after 577.614149ms: waiting for machine to come up
	I1225 13:26:59.540871 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:59.541581 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:59.541607 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:59.541494 1484760 retry.go:31] will retry after 1.177902051s: waiting for machine to come up
	I1225 13:27:00.799310 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.799355 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.799376 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.905272 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.905311 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.905330 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.922285 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.922324 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:01.367590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.374093 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.374155 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.440592 1484104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.709419632s)
	I1225 13:27:02.440624 1484104 crio.go:451] Took 3.709555 seconds to extract the tarball
	I1225 13:27:02.440636 1484104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:02.504136 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:02.613720 1484104 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:27:02.613752 1484104 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:27:02.613839 1484104 ssh_runner.go:195] Run: crio config
	I1225 13:27:02.685414 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:02.685436 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:02.685459 1484104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:02.685477 1484104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-344803 NodeName:default-k8s-diff-port-344803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:27:02.685627 1484104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-344803"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:02.685710 1484104 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-344803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1225 13:27:02.685778 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:27:02.696327 1484104 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:02.696420 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:02.707369 1484104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1225 13:27:02.728181 1484104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:02.748934 1484104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1225 13:27:02.770783 1484104 ssh_runner.go:195] Run: grep 192.168.61.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:02.775946 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:02.790540 1484104 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803 for IP: 192.168.61.39
	I1225 13:27:02.790590 1484104 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:02.790792 1484104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:02.790862 1484104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:02.790961 1484104 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.key
	I1225 13:27:02.859647 1484104 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key.daee23f3
	I1225 13:27:02.859773 1484104 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key
	I1225 13:27:02.859934 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:02.859993 1484104 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:02.860010 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:02.860037 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:02.860061 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:02.860082 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:02.860121 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:02.860871 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:02.889354 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 13:27:02.916983 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:02.943348 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:27:02.969940 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:02.996224 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:03.021662 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:03.052589 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:03.080437 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:03.107973 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:03.134921 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:03.161948 1484104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:03.184606 1484104 ssh_runner.go:195] Run: openssl version
	I1225 13:27:03.192305 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:03.204868 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209793 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209895 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.216568 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:03.229131 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:03.241634 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247328 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247397 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.253730 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:03.267063 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:03.281957 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288393 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288481 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.295335 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:03.307900 1484104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:03.313207 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:03.319949 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:03.327223 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:03.333927 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:03.341434 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:03.349298 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:27:03.356303 1484104 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:03.356409 1484104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:03.356463 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:03.407914 1484104 cri.go:89] found id: ""
	I1225 13:27:03.407991 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:03.418903 1484104 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:03.418928 1484104 kubeadm.go:636] restartCluster start
	I1225 13:27:03.418981 1484104 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:03.429758 1484104 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.431242 1484104 kubeconfig.go:92] found "default-k8s-diff-port-344803" server: "https://192.168.61.39:8444"
	I1225 13:27:03.433847 1484104 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:03.443564 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.443648 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.457188 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.943692 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.943806 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.956490 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:04.443680 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.443781 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.464817 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:00.671397 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:27:01.665347 1483118 node_ready.go:49] node "no-preload-330063" has status "Ready":"True"
	I1225 13:27:01.665383 1483118 node_ready.go:38] duration metric: took 7.504959726s waiting for node "no-preload-330063" to be "Ready" ...
	I1225 13:27:01.665398 1483118 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:01.675515 1483118 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688377 1483118 pod_ready.go:92] pod "coredns-76f75df574-pwk9h" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:01.688467 1483118 pod_ready.go:81] duration metric: took 12.819049ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688492 1483118 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:03.697007 1483118 pod_ready.go:102] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:04.379595 1483118 pod_ready.go:92] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.379628 1483118 pod_ready.go:81] duration metric: took 2.691119222s waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.379643 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393427 1483118 pod_ready.go:92] pod "kube-apiserver-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.393459 1483118 pod_ready.go:81] duration metric: took 13.806505ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393473 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454291 1483118 pod_ready.go:92] pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.454387 1483118 pod_ready.go:81] duration metric: took 60.903507ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454417 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525436 1483118 pod_ready.go:92] pod "kube-proxy-jbch6" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.525471 1483118 pod_ready.go:81] duration metric: took 71.040817ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525486 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546670 1483118 pod_ready.go:92] pod "kube-scheduler-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.546709 1483118 pod_ready.go:81] duration metric: took 21.213348ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546726 1483118 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.868308 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.913335 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.913393 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.367660 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.375382 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.375424 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.867590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.873638 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.873680 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.368014 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.377785 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.377827 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.867933 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.873979 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.874013 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.367576 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.377835 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.377884 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.868444 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.879138 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.879187 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:05.367519 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:05.377570 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:27:05.388572 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:05.388605 1483946 api_server.go:131] duration metric: took 9.521192442s to wait for apiserver health ...
	I1225 13:27:05.388615 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:27:05.388625 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:05.390592 1483946 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
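The healthz probes above are plain GETs against https://192.168.50.179:8443/healthz, retried roughly every 500ms while the endpoint answers 500 (here because the rbac/bootstrap-roles post-start hook had not completed yet). A minimal sketch of that kind of poll, assuming an endpoint you can reach and skipping TLS verification for brevity (minikube's api_server.go authenticates with the cluster's client certificates instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative only: certificate verification is disabled here, which is
        // tolerable for a throwaway test VM but not for anything else.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }

        url := "https://192.168.50.179:8443/healthz" // endpoint taken from the log above
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body)) // the 200 response body is simply "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }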
	I1225 13:27:00.720918 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:00.721430 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:00.721457 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:00.721380 1484760 retry.go:31] will retry after 931.125211ms: waiting for machine to come up
	I1225 13:27:01.654661 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:01.655341 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:01.655367 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:01.655263 1484760 retry.go:31] will retry after 1.333090932s: waiting for machine to come up
	I1225 13:27:02.991018 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:02.991520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:02.991555 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:02.991468 1484760 retry.go:31] will retry after 2.006684909s: waiting for machine to come up
	I1225 13:27:05.000424 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:05.000972 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:05.001023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:05.000908 1484760 retry.go:31] will retry after 2.72499386s: waiting for machine to come up
	I1225 13:27:05.391952 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:05.406622 1483946 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:05.429599 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:05.441614 1483946 system_pods.go:59] 9 kube-system pods found
	I1225 13:27:05.441681 1483946 system_pods.go:61] "coredns-5dd5756b68-4jqz4" [026524a6-1f73-4644-8a80-b276326178b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441698 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441710 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:05.441721 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:05.441732 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:05.441746 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:05.441758 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:05.441773 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:05.441790 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:27:05.441812 1483946 system_pods.go:74] duration metric: took 12.174684ms to wait for pod list to return data ...
	I1225 13:27:05.441824 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:05.447018 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:05.447064 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:05.447079 1483946 node_conditions.go:105] duration metric: took 5.247366ms to run NodePressure ...
	I1225 13:27:05.447106 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:05.767972 1483946 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774281 1483946 kubeadm.go:787] kubelet initialised
	I1225 13:27:05.774307 1483946 kubeadm.go:788] duration metric: took 6.300121ms waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774316 1483946 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:05.781474 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.789698 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789732 1483946 pod_ready.go:81] duration metric: took 8.22748ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.789746 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789758 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.798517 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798584 1483946 pod_ready.go:81] duration metric: took 8.811967ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.798601 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798612 1483946 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.804958 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.804998 1483946 pod_ready.go:81] duration metric: took 6.356394ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.805018 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.805028 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.834502 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834549 1483946 pod_ready.go:81] duration metric: took 29.510044ms waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.834561 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834571 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.234676 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234728 1483946 pod_ready.go:81] duration metric: took 400.145957ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.234742 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234752 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.634745 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634785 1483946 pod_ready.go:81] duration metric: took 400.019189ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.634798 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634807 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.034762 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034793 1483946 pod_ready.go:81] duration metric: took 399.977148ms waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.034803 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034810 1483946 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.433932 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433969 1483946 pod_ready.go:81] duration metric: took 399.14889ms waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.433982 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433992 1483946 pod_ready.go:38] duration metric: took 1.659666883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:07.434016 1483946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:07.448377 1483946 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:07.448405 1483946 kubeadm.go:640] restartCluster took 25.610658268s
	I1225 13:27:07.448415 1483946 kubeadm.go:406] StartCluster complete in 25.665045171s
	I1225 13:27:07.448443 1483946 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.448530 1483946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:07.451369 1483946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.453102 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:07.453244 1483946 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:07.453332 1483946 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880612"
	I1225 13:27:07.453351 1483946 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-880612"
	W1225 13:27:07.453363 1483946 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:07.453432 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453450 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:27:07.453516 1483946 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880612"
	I1225 13:27:07.453536 1483946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880612"
	I1225 13:27:07.453860 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453870 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453902 1483946 addons.go:69] Setting metrics-server=true in profile "embed-certs-880612"
	I1225 13:27:07.453917 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.453925 1483946 addons.go:237] Setting addon metrics-server=true in "embed-certs-880612"
	W1225 13:27:07.454160 1483946 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:07.454211 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453903 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.454601 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.454669 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.476508 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1225 13:27:07.476720 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1225 13:27:07.477202 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477210 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477794 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477815 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.477957 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477971 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.478407 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.478478 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.479041 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.479083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.480350 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.483762 1483946 addons.go:237] Setting addon default-storageclass=true in "embed-certs-880612"
	W1225 13:27:07.483783 1483946 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:07.483816 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.484249 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.484285 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.489369 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1225 13:27:07.489817 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.490332 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.490354 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.491339 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.494037 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.494083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.501003 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1225 13:27:07.501737 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.502399 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.502422 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.502882 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.503092 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.505387 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.507725 1483946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:07.509099 1483946 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.509121 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:07.509153 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.513153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.513923 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.513957 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.514226 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.514426 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.514610 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.515190 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.516933 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I1225 13:27:07.517681 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.518194 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.518220 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.518784 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1225 13:27:07.519309 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.519400 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.519930 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.519956 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.520525 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.520573 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.520819 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.521050 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.523074 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.525265 1483946 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:07.526542 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:07.526569 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:07.526598 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.530316 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.530846 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.530883 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.531223 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.531571 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.531832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.532070 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.544917 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1225 13:27:07.545482 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.546037 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.546059 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.546492 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.546850 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.548902 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.549177 1483946 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.549196 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:07.549218 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.553036 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553541 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.553572 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553784 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.554642 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.554893 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.555581 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
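The sshutil.go lines above open SSH sessions to the guest (user "docker", key-based auth) so the addon manifests can be copied in and applied. What follows is only a sketch of the general shape of such a client using golang.org/x/crypto/ssh, not minikube's sshutil; the address, user and key path are copied from the log, and the command run is arbitrary:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a short-lived test VM
        }
        client, err := ssh.Dial("tcp", "192.168.50.179:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Println(string(out), err)
    }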
	I1225 13:27:07.676244 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.704310 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.718012 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:07.718043 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:07.779041 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:07.779073 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:07.786154 1483946 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:07.812338 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.812373 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:07.837795 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.974099 1483946 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-880612" context rescaled to 1 replicas
	I1225 13:27:07.974158 1483946 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:07.977116 1483946 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:07.978618 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:09.163988 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.459630406s)
	I1225 13:27:09.164059 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164073 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164091 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.487803106s)
	I1225 13:27:09.164129 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164149 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164617 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164624 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164629 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.164639 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164641 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164651 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164653 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164661 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164666 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164622 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165025 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165095 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165121 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.165172 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165186 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.188483 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.188510 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.188847 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.188898 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.188906 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.193684 1483946 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.215023208s)
	I1225 13:27:09.193736 1483946 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:09.193789 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.355953438s)
	I1225 13:27:09.193825 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.193842 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.194176 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.194192 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.194208 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.194219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.195998 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.196000 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.196033 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.196044 1483946 addons.go:473] Verifying addon metrics-server=true in "embed-certs-880612"
	I1225 13:27:09.198211 1483946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:04.943819 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.943958 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.960056 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.443699 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.443795 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.461083 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.943713 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.943821 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.960712 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.444221 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.444305 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.458894 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.944546 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.944630 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.958754 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.444332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.444462 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.491468 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.943982 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.944135 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.960697 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.444285 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.444408 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.461209 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.943720 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.943866 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.959990 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:09.444604 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.444727 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.463020 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.556605 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:08.560748 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:07.728505 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:07.728994 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:07.729023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:07.728936 1484760 retry.go:31] will retry after 2.39810797s: waiting for machine to come up
	I1225 13:27:10.129402 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:10.129925 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:10.129960 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:10.129860 1484760 retry.go:31] will retry after 4.278491095s: waiting for machine to come up
	I1225 13:27:09.199531 1483946 addons.go:508] enable addons completed in 1.746293071s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:11.199503 1483946 node_ready.go:49] node "embed-certs-880612" has status "Ready":"True"
	I1225 13:27:11.199529 1483946 node_ready.go:38] duration metric: took 2.005779632s waiting for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:11.199541 1483946 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:11.207447 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:09.943841 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.943948 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.960478 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.444037 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.444309 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.463480 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.943760 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.943886 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.960191 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.444602 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.444702 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.458181 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.943674 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.943783 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.956418 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.443719 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.443835 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.456707 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.944332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.944434 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.957217 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.443965 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:13.444076 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:13.455968 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.456008 1484104 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:13.456051 1484104 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:13.456067 1484104 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:13.456145 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:13.497063 1484104 cri.go:89] found id: ""
	I1225 13:27:13.497135 1484104 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:13.513279 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:13.522816 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:13.522885 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532580 1484104 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532612 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:13.668876 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:14.848056 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179140695s)
	I1225 13:27:14.848090 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:11.072420 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:13.555685 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:14.413456 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:14.414013 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:14.414043 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:14.413960 1484760 retry.go:31] will retry after 4.470102249s: waiting for machine to come up
	I1225 13:27:11.714710 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.714747 1483946 pod_ready.go:81] duration metric: took 507.263948ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.714760 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720448 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.720472 1483946 pod_ready.go:81] duration metric: took 5.705367ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720481 1483946 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725691 1483946 pod_ready.go:92] pod "etcd-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.725717 1483946 pod_ready.go:81] duration metric: took 5.229718ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725725 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238949 1483946 pod_ready.go:92] pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.238979 1483946 pod_ready.go:81] duration metric: took 1.513246575s waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238992 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244957 1483946 pod_ready.go:92] pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.244980 1483946 pod_ready.go:81] duration metric: took 5.981457ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244991 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609255 1483946 pod_ready.go:92] pod "kube-proxy-677d7" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.609282 1483946 pod_ready.go:81] duration metric: took 364.285426ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609292 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621505 1483946 pod_ready.go:92] pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:15.621540 1483946 pod_ready.go:81] duration metric: took 2.012239726s waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621553 1483946 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.047153 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.142405 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.237295 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:15.237406 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:15.737788 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.238003 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.738328 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.238494 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.738177 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.237676 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.259279 1484104 api_server.go:72] duration metric: took 3.021983877s to wait for apiserver process to appear ...
	I1225 13:27:18.259305 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:18.259331 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:15.555810 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.056361 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.888547 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889138 1482618 main.go:141] libmachine: (old-k8s-version-198979) Found IP for machine: 192.168.39.186
	I1225 13:27:18.889167 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserving static IP address...
	I1225 13:27:18.889183 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has current primary IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889631 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.889672 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserved static IP address: 192.168.39.186
	I1225 13:27:18.889702 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | skip adding static IP to network mk-old-k8s-version-198979 - found existing host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"}
	I1225 13:27:18.889724 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Getting to WaitForSSH function...
	I1225 13:27:18.889741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting for SSH to be available...
	I1225 13:27:18.892133 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892475 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.892509 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH client type: external
	I1225 13:27:18.892658 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa (-rw-------)
	I1225 13:27:18.892688 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:27:18.892703 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | About to run SSH command:
	I1225 13:27:18.892722 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | exit 0
	I1225 13:27:18.991797 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | SSH cmd err, output: <nil>: 
	I1225 13:27:18.992203 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetConfigRaw
	I1225 13:27:18.992943 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:18.996016 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996344 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.996416 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996762 1482618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:27:18.996990 1482618 machine.go:88] provisioning docker machine ...
	I1225 13:27:18.997007 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:18.997254 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997454 1482618 buildroot.go:166] provisioning hostname "old-k8s-version-198979"
	I1225 13:27:18.997483 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997670 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.000725 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001114 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.001144 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001332 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.001504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001686 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001836 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.002039 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.002592 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.002614 1482618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-198979 && echo "old-k8s-version-198979" | sudo tee /etc/hostname
	I1225 13:27:19.148260 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-198979
	
	I1225 13:27:19.148291 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.151692 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152160 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.152196 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152350 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.152566 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152743 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152941 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.153133 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.153647 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.153678 1482618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-198979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-198979/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-198979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:27:19.294565 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:27:19.294606 1482618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:27:19.294635 1482618 buildroot.go:174] setting up certificates
	I1225 13:27:19.294649 1482618 provision.go:83] configureAuth start
	I1225 13:27:19.294663 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:19.295039 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:19.298511 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.298933 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.298971 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.299137 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.302045 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302486 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.302520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302682 1482618 provision.go:138] copyHostCerts
	I1225 13:27:19.302777 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:27:19.302806 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:27:19.302869 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:27:19.302994 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:27:19.303012 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:27:19.303042 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:27:19.303103 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:27:19.303113 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:27:19.303131 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:27:19.303177 1482618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-198979 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube old-k8s-version-198979]
	I1225 13:27:19.444049 1482618 provision.go:172] copyRemoteCerts
	I1225 13:27:19.444142 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:27:19.444180 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.447754 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448141 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.448174 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448358 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.448593 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.448818 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.448994 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:19.545298 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:27:19.576678 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:27:19.604520 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:27:19.631640 1482618 provision.go:86] duration metric: configureAuth took 336.975454ms
	I1225 13:27:19.631674 1482618 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:27:19.631899 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:19.632012 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.635618 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636130 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.636166 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636644 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.636903 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637088 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637315 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.637511 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.638005 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.638040 1482618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:27:19.990807 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:27:19.990844 1482618 machine.go:91] provisioned docker machine in 993.840927ms
	I1225 13:27:19.990857 1482618 start.go:300] post-start starting for "old-k8s-version-198979" (driver="kvm2")
	I1225 13:27:19.990870 1482618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:27:19.990908 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:19.991349 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:27:19.991388 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.994622 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.994980 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.995015 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.995147 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.995402 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.995574 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.995713 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.089652 1482618 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:27:20.094575 1482618 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:27:20.094611 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:27:20.094716 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:27:20.094856 1482618 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:27:20.095010 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:27:20.105582 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:20.133802 1482618 start.go:303] post-start completed in 142.928836ms
	I1225 13:27:20.133830 1482618 fix.go:56] fixHost completed within 25.200724583s
	I1225 13:27:20.133860 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.137215 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137635 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.137670 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.138081 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138322 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138518 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.138732 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:20.139194 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:20.139228 1482618 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:27:20.268572 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510840.203941272
	
	I1225 13:27:20.268602 1482618 fix.go:206] guest clock: 1703510840.203941272
	I1225 13:27:20.268613 1482618 fix.go:219] Guest: 2023-12-25 13:27:20.203941272 +0000 UTC Remote: 2023-12-25 13:27:20.133835417 +0000 UTC m=+384.781536006 (delta=70.105855ms)
	I1225 13:27:20.268641 1482618 fix.go:190] guest clock delta is within tolerance: 70.105855ms
	I1225 13:27:20.268651 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 25.335582747s
	I1225 13:27:20.268683 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.268981 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:20.272181 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.272666 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272948 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273612 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273851 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273925 1482618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:27:20.273990 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.274108 1482618 ssh_runner.go:195] Run: cat /version.json
	I1225 13:27:20.274133 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.277090 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277381 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.277608 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278041 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278066 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.278085 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.278284 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278293 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278500 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.278516 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278691 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278852 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.395858 1482618 ssh_runner.go:195] Run: systemctl --version
	I1225 13:27:20.403417 1482618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:27:17.629846 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:19.635250 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:20.559485 1482618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:27:20.566356 1482618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:27:20.566487 1482618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:27:20.584531 1482618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:27:20.584565 1482618 start.go:475] detecting cgroup driver to use...
	I1225 13:27:20.584648 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:27:20.599889 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:27:20.613197 1482618 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:27:20.613278 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:27:20.626972 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:27:20.640990 1482618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:27:20.752941 1482618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:27:20.886880 1482618 docker.go:219] disabling docker service ...
	I1225 13:27:20.886971 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:27:20.903143 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:27:20.919083 1482618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:27:21.042116 1482618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:27:21.171997 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:27:21.185237 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:27:21.204711 1482618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:27:21.204787 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.215196 1482618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:27:21.215276 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.226411 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.239885 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.250576 1482618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:27:21.263723 1482618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:27:21.274356 1482618 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:27:21.274462 1482618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:27:21.288126 1482618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:27:21.300772 1482618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:27:21.467651 1482618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:27:21.700509 1482618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:27:21.700618 1482618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:27:21.708118 1482618 start.go:543] Will wait 60s for crictl version
	I1225 13:27:21.708207 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:21.712687 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:27:21.768465 1482618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:27:21.768563 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.836834 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.907627 1482618 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1225 13:27:21.288635 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.288669 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.288685 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.374966 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.375010 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.760268 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.771864 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:21.771898 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.259417 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.271720 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.271779 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.760217 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.767295 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.767333 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:23.259377 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:23.265348 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:27:23.275974 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:23.276010 1484104 api_server.go:131] duration metric: took 5.01669783s to wait for apiserver health ...
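The 403 and 500 responses above are the normal progression while the apiserver finishes its post-start hooks: the anonymous probe is forbidden until the rbac/bootstrap-roles hook has created the role that lets unauthenticated clients read /healthz, and the 500 bodies list exactly which hooks are still pending before the endpoint flips to 200. A minimal manual probe against the address from this log (the ?verbose form should print the same per-check breakdown):

    # illustrative only; -k because the cluster CA is not in the host trust store
    curl -k https://192.168.61.39:8444/healthz
    curl -k "https://192.168.61.39:8444/healthz?verbose"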
	I1225 13:27:23.276024 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:23.276033 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:23.278354 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:23.279804 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:23.300762 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
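The 457-byte conflist copied here is minikube's generated bridge CNI configuration (bridge plus port-mapping plugins on the cluster's pod CIDR). To inspect exactly what was written, a sketch, assuming the profile is default-k8s-diff-port-344803 and minikube is on PATH:

    minikube -p default-k8s-diff-port-344803 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"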
	I1225 13:27:23.326548 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:23.346826 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:27:23.346871 1484104 system_pods.go:61] "coredns-5dd5756b68-l7qnn" [860c88a5-5bb9-4556-814a-08f1cc882c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:23.346884 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [eca3b322-fbba-4d8e-b8be-10b7f552bd32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:23.346896 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [730b8b80-bf80-4769-b4cd-7e81b0600599] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:23.346908 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [8424df4f-e2d8-4f22-8593-21cf0ccc82eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:23.346965 1484104 system_pods.go:61] "kube-proxy-wnjn2" [ed9e8d7e-d237-46ab-84d1-a78f7f931aab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:23.346988 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [f865e5a4-4b21-4d15-a437-47965f0d1db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:23.347009 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-zgrj5" [d52789c5-dfe7-48e6-9dfd-a7dc5b5be6ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:23.347099 1484104 system_pods.go:61] "storage-provisioner" [96723fff-956b-42c4-864b-b18afb0c0285] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:27:23.347116 1484104 system_pods.go:74] duration metric: took 20.540773ms to wait for pod list to return data ...
	I1225 13:27:23.347135 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:23.358619 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:23.358673 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:23.358690 1484104 node_conditions.go:105] duration metric: took 11.539548ms to run NodePressure ...
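The pod listing and NodePressure verification above can be reproduced with kubectl; a sketch, assuming the kubeconfig context matches the profile name:

    kubectl --context default-k8s-diff-port-344803 get pods -n kube-system
    kubectl --context default-k8s-diff-port-344803 describe node | grep -A8 'Conditions:'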
	I1225 13:27:23.358716 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:23.795558 1484104 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804103 1484104 kubeadm.go:787] kubelet initialised
	I1225 13:27:23.804125 1484104 kubeadm.go:788] duration metric: took 8.535185ms waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804133 1484104 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:23.814199 1484104 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:20.557056 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:22.569215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.054111 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:21.909021 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:21.912423 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.912802 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:21.912828 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.913199 1482618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:27:21.917615 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
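The hosts-file rewrite above is a deliberate pattern: sudo echo "..." > /etc/hosts would fail because the redirection is performed by the calling, unprivileged shell, so the file is rebuilt in a temp location and then installed with sudo cp. An annotated form of the same command:

    # drop any stale host.minikube.internal line, append the fresh one, then install the result
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts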
	I1225 13:27:21.931709 1482618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 13:27:21.931830 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:21.991133 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:21.991246 1482618 ssh_runner.go:195] Run: which lz4
	I1225 13:27:21.997721 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:27:22.003171 1482618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:27:22.003218 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1225 13:27:23.975639 1482618 crio.go:444] Took 1.977982 seconds to copy over tarball
	I1225 13:27:23.975723 1482618 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:27:21.643721 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:24.132742 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.827617 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:28.322507 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.055526 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.558580 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.243294 1482618 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.267535049s)
	I1225 13:27:27.243339 1482618 crio.go:451] Took 3.267670 seconds to extract the tarball
	I1225 13:27:27.243368 1482618 ssh_runner.go:146] rm: /preloaded.tar.lz4
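Because /preloaded.tar.lz4 was not already on the node (the stat probe above exited 1), the tarball is copied from the runner's local cache and unpacked into /var with lz4. The equivalent manual steps, a sketch using the paths from this log:

    # on the CI host: the cached preload for this k8s version / runtime combination
    ls -lh /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
    # on the node, after the file has been scp'd to /preloaded.tar.lz4:
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4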
	I1225 13:27:27.285528 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:27.338914 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:27.338948 1482618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:27:27.339078 1482618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.339115 1482618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.339118 1482618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.339160 1482618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.339114 1482618 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.339054 1482618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.339059 1482618 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:27:27.339060 1482618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340631 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.340647 1482618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.340658 1482618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.340632 1482618 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.340666 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340635 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.502560 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.502567 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.510502 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1225 13:27:27.513052 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.518668 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.522882 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.553027 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.608178 1482618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1225 13:27:27.608235 1482618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.608294 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.655271 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.671173 1482618 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1225 13:27:27.671223 1482618 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1225 13:27:27.671283 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.671290 1482618 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1225 13:27:27.671330 1482618 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.671378 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728043 1482618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1225 13:27:27.728102 1482618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.728139 1482618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1225 13:27:27.728159 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728187 1482618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.728222 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739034 1482618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1225 13:27:27.739077 1482618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.739133 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739156 1482618 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1225 13:27:27.739205 1482618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.739213 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.739261 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.858062 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1225 13:27:27.858089 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.858143 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.858175 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.858237 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.858301 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1225 13:27:27.858358 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:28.004051 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:27:28.004125 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1225 13:27:28.004183 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.004226 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1225 13:27:28.004304 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1225 13:27:28.004369 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1225 13:27:28.005012 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1225 13:27:28.009472 1482618 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1225 13:27:28.009491 1482618 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.009550 1482618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1225 13:27:29.560553 1482618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550970125s)
	I1225 13:27:29.560586 1482618 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1225 13:27:29.560668 1482618 cache_images.go:92] LoadImages completed in 2.22170407s
	W1225 13:27:29.560766 1482618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
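The image-cache cycle above is: podman image inspect to see whether the image is already in the container store, crictl rmi to drop whatever stale copy is there, then podman load from the archive staged under /var/lib/minikube/images. Only pause_3.1 had a cache file on this runner, so the warning is non-fatal; the remaining images will have to be pulled from the registry instead. The same cycle by hand, for the one image that did load:

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1   # already present?
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1                       # remove a stale copy
    sudo podman load -i /var/lib/minikube/images/pause_3.1                   # load the cached archive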
	I1225 13:27:29.560846 1482618 ssh_runner.go:195] Run: crio config
	I1225 13:27:29.639267 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:29.639298 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:29.639324 1482618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:29.639375 1482618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-198979 NodeName:old-k8s-version-198979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 13:27:29.639598 1482618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-198979"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-198979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.186:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:29.639711 1482618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-198979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:27:29.639800 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1225 13:27:29.649536 1482618 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:29.649614 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:29.658251 1482618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1225 13:27:29.678532 1482618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:29.698314 1482618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
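At this point the kubelet unit file, its 10-kubeadm.conf drop-in, and the rendered kubeadm config have all been staged on the node. A sketch of what typically follows, and of how the staged config can be compared with the active one (the diff itself appears further down in this log):

    sudo systemctl daemon-reload && sudo systemctl restart kubelet
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new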
	I1225 13:27:29.718873 1482618 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:29.723656 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:29.737736 1482618 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979 for IP: 192.168.39.186
	I1225 13:27:29.737787 1482618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:29.738006 1482618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:29.738069 1482618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:29.738147 1482618 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.key
	I1225 13:27:29.738211 1482618 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key.d0691019
	I1225 13:27:29.738252 1482618 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key
	I1225 13:27:29.738456 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:29.738501 1482618 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:29.738511 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:29.738543 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:29.738578 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:29.738617 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:29.738682 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:29.739444 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:29.765303 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:27:29.790702 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:29.818835 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 13:27:29.845659 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:29.872043 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:29.902732 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:29.928410 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:29.954350 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:29.978557 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:30.007243 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:30.036876 1482618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:30.055990 1482618 ssh_runner.go:195] Run: openssl version
	I1225 13:27:30.062813 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:30.075937 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082034 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082145 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.089645 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:30.102657 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:30.115701 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120635 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120711 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.128051 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:30.139465 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:30.151046 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156574 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156656 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.162736 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:30.174356 1482618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:30.180962 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:30.187746 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:30.194481 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:30.202279 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:30.210555 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:30.218734 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
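Two openssl idioms do the work in this block: x509 -hash -noout prints the subject hash that the symlinks under /etc/ssl/certs must be named after, and x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A sketch of both, using paths from the log:

    # hash-named symlink so the system trust store can resolve the CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"
    # expiry guard: succeeds only if the cert stays valid for at least another 86400 seconds
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 && echo "still valid for 24h"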
	I1225 13:27:30.225325 1482618 kubeadm.go:404] StartCluster: {Name:old-k8s-version-198979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:30.225424 1482618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:30.225478 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:30.274739 1482618 cri.go:89] found id: ""
	I1225 13:27:30.274842 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:30.285949 1482618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:30.285980 1482618 kubeadm.go:636] restartCluster start
	I1225 13:27:30.286051 1482618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:30.295521 1482618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:30.296804 1482618 kubeconfig.go:92] found "old-k8s-version-198979" server: "https://192.168.39.186:8443"
	I1225 13:27:30.299493 1482618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:30.308641 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.308745 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.320654 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:26.631365 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.129943 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.131590 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.329682 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.824743 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.824770 1484104 pod_ready.go:81] duration metric: took 8.010540801s waiting for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.824781 1484104 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830321 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.830347 1484104 pod_ready.go:81] duration metric: took 5.559816ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830358 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338865 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:32.338898 1484104 pod_ready.go:81] duration metric: took 508.532498ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338913 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846030 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.846054 1484104 pod_ready.go:81] duration metric: took 1.507133449s waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846065 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851826 1484104 pod_ready.go:92] pod "kube-proxy-wnjn2" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.851846 1484104 pod_ready.go:81] duration metric: took 5.775207ms waiting for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851855 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.054359 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:34.054586 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.809359 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.809482 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.821194 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.308690 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.308830 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.322775 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.809511 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.809612 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.823928 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.309450 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.309569 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.320937 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.809587 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.809686 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.822957 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.308905 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.308992 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.321195 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.808702 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.808803 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.820073 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.309661 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.309760 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.322931 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.809599 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.809724 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.825650 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:35.308697 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.308798 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.321313 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.630973 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.128884 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.859839 1484104 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.359809 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:36.359838 1484104 pod_ready.go:81] duration metric: took 2.507975576s waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:36.359853 1484104 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:38.371707 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.554699 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:39.053732 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.809083 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.809186 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.821434 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.309100 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.309181 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.322566 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.809026 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.809136 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.820791 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.309382 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.309501 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.321365 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.809397 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.809515 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.821538 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.309716 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.309819 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.321060 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.809627 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.809728 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.821784 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.309363 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.309483 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.320881 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.809420 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.809597 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.820752 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.308911 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:40.309009 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:40.322568 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.322614 1482618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:40.322653 1482618 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:40.322670 1482618 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:40.322730 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:40.366271 1482618 cri.go:89] found id: ""
	I1225 13:27:40.366365 1482618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:40.383123 1482618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:40.392329 1482618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:40.392412 1482618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401435 1482618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401471 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:38.131920 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.629516 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.868311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:42.872952 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:41.054026 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:43.054332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.538996 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.466467 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.697265 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.796796 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.898179 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:41.898290 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.398616 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.899373 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.399246 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.898788 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.923617 1482618 api_server.go:72] duration metric: took 2.025431683s to wait for apiserver process to appear ...
	I1225 13:27:43.923650 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:43.923684 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:42.632296 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.128501 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.368613 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.868011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.054778 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.559938 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:48.924695 1482618 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 13:27:48.924755 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.954284 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.954379 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:49.954401 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.985515 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.985568 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:50.424616 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.431560 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.431604 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:50.924173 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.935578 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.935622 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:51.424341 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:51.431709 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:27:51.440822 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:27:51.440855 1482618 api_server.go:131] duration metric: took 7.517198191s to wait for apiserver health ...
	I1225 13:27:51.440866 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:51.440873 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:51.442446 1482618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:47.130936 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:49.132275 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:51.443830 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:51.456628 1482618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:51.477822 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:51.487046 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:27:51.487082 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:27:51.487087 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:27:51.487091 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:27:51.487096 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Pending
	I1225 13:27:51.487100 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:27:51.487103 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:27:51.487107 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:27:51.487113 1482618 system_pods.go:74] duration metric: took 9.266811ms to wait for pod list to return data ...
	I1225 13:27:51.487120 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:51.491782 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:51.491817 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:51.491831 1482618 node_conditions.go:105] duration metric: took 4.70597ms to run NodePressure ...
	I1225 13:27:51.491855 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:51.768658 1482618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776258 1482618 kubeadm.go:787] kubelet initialised
	I1225 13:27:51.776283 1482618 kubeadm.go:788] duration metric: took 7.588357ms waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776293 1482618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:51.784053 1482618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.791273 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791314 1482618 pod_ready.go:81] duration metric: took 7.223677ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.791328 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791338 1482618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.801453 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801491 1482618 pod_ready.go:81] duration metric: took 10.138221ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.801505 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801514 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.809536 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809577 1482618 pod_ready.go:81] duration metric: took 8.051285ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.809590 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809608 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.882231 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882268 1482618 pod_ready.go:81] duration metric: took 72.643349ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.882299 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882309 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.282486 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282531 1482618 pod_ready.go:81] duration metric: took 400.208562ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.282543 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282552 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.689279 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689329 1482618 pod_ready.go:81] duration metric: took 406.764819ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.689343 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689353 1482618 pod_ready.go:38] duration metric: took 913.049281ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:52.689387 1482618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:52.705601 1482618 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:52.705628 1482618 kubeadm.go:640] restartCluster took 22.419638621s
	I1225 13:27:52.705639 1482618 kubeadm.go:406] StartCluster complete in 22.480335985s
	I1225 13:27:52.705663 1482618 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.705760 1482618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:52.708825 1482618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.709185 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:52.709313 1482618 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:52.709404 1482618 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709427 1482618 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-198979"
	W1225 13:27:52.709435 1482618 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:52.709443 1482618 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709460 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:52.709466 1482618 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709475 1482618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-198979"
	I1225 13:27:52.709482 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709488 1482618 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-198979"
	W1225 13:27:52.709502 1482618 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:52.709553 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709914 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709953 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709964 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709992 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709965 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.710046 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.729360 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1225 13:27:52.730016 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.730343 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1225 13:27:52.730527 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1225 13:27:52.730777 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.730808 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.730852 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731329 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.731365 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.731381 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.731589 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.731638 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731715 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.732311 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.732360 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.732731 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.732763 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.733225 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.733787 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.733859 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.735675 1482618 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-198979"
	W1225 13:27:52.735694 1482618 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:52.735725 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.736079 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.736117 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.751072 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I1225 13:27:52.752097 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.753002 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.753022 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.753502 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.753741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.756158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.758410 1482618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:52.758080 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I1225 13:27:52.759927 1482618 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.759942 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:52.759963 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.760521 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.761648 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.761665 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.762046 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.762823 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.762872 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.763974 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764712 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.764748 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764752 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.765009 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.765461 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.791493 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.792265 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.792294 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.792795 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.793023 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.795238 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.799536 1482618 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:52.800892 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:52.800920 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:52.800955 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.804762 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806571 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.806568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.806606 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806957 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.807115 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.807260 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.811419 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1225 13:27:52.811816 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.812352 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.812379 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.812872 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.813083 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.814823 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.815122 1482618 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:52.815138 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:52.815158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.818411 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.818892 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.818926 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.819253 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.819504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.819705 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.819981 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.963144 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.974697 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:52.974733 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:53.021391 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:53.039959 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:53.039991 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:53.121390 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.121421 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:53.196232 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.256419 1482618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-198979" context rescaled to 1 replicas
	I1225 13:27:53.256479 1482618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:53.258366 1482618 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:53.259807 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:53.276151 1482618 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:53.687341 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687374 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.687666 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.687690 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.687701 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687710 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.689261 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.689286 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.689294 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.725954 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.725985 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.726715 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.726737 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.726743 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.726776 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.726787 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.727040 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.727054 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.727061 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.744318 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.744356 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.744696 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.744745 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.846817 1482618 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:27:53.846878 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.846899 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847234 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847301 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847317 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847329 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.847351 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847728 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847767 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847793 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847810 1482618 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-198979"
	I1225 13:27:53.850107 1482618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:49.870506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.369916 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:50.056130 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.562555 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:53.851456 1482618 addons.go:508] enable addons completed in 1.14214354s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:51.635205 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.131852 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.868902 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.367267 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.368997 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.057522 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.555214 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.851206 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:58.350906 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:28:00.350892 1482618 node_ready.go:49] node "old-k8s-version-198979" has status "Ready":"True"
	I1225 13:28:00.350918 1482618 node_ready.go:38] duration metric: took 6.504066205s waiting for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:28:00.350928 1482618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:00.355882 1482618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362249 1482618 pod_ready.go:92] pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.362281 1482618 pod_ready.go:81] duration metric: took 6.362168ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362290 1482618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367738 1482618 pod_ready.go:92] pod "etcd-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.367777 1482618 pod_ready.go:81] duration metric: took 5.478984ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367790 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373724 1482618 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.373754 1482618 pod_ready.go:81] duration metric: took 5.95479ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373774 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380810 1482618 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.380841 1482618 pod_ready.go:81] duration metric: took 7.058206ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380854 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:56.635216 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.129464 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:01.132131 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.750612 1482618 pod_ready.go:92] pod "kube-proxy-vw9lf" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.750641 1482618 pod_ready.go:81] duration metric: took 369.779347ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.750651 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151567 1482618 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:01.151596 1482618 pod_ready.go:81] duration metric: took 400.937167ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151617 1482618 pod_ready.go:38] duration metric: took 800.677743ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:01.151634 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:28:01.151694 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:28:01.170319 1482618 api_server.go:72] duration metric: took 7.913795186s to wait for apiserver process to appear ...
	I1225 13:28:01.170349 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:28:01.170368 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:28:01.177133 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:28:01.178326 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:28:01.178351 1482618 api_server.go:131] duration metric: took 7.994163ms to wait for apiserver health ...
	I1225 13:28:01.178361 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:28:01.352663 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:28:01.352693 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.352697 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.352702 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.352706 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.352710 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.352714 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.352718 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.352724 1482618 system_pods.go:74] duration metric: took 174.35745ms to wait for pod list to return data ...
	I1225 13:28:01.352731 1482618 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:28:01.554095 1482618 default_sa.go:45] found service account: "default"
	I1225 13:28:01.554129 1482618 default_sa.go:55] duration metric: took 201.391529ms for default service account to be created ...
	I1225 13:28:01.554139 1482618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:28:01.757666 1482618 system_pods.go:86] 7 kube-system pods found
	I1225 13:28:01.757712 1482618 system_pods.go:89] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.757724 1482618 system_pods.go:89] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.757731 1482618 system_pods.go:89] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.757747 1482618 system_pods.go:89] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.757754 1482618 system_pods.go:89] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.757763 1482618 system_pods.go:89] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.757769 1482618 system_pods.go:89] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.757785 1482618 system_pods.go:126] duration metric: took 203.63938ms to wait for k8s-apps to be running ...
	I1225 13:28:01.757800 1482618 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:28:01.757863 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:28:01.771792 1482618 system_svc.go:56] duration metric: took 13.980705ms WaitForService to wait for kubelet.
	I1225 13:28:01.771821 1482618 kubeadm.go:581] duration metric: took 8.515309843s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:28:01.771843 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:28:01.952426 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:28:01.952463 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:28:01.952477 1482618 node_conditions.go:105] duration metric: took 180.629128ms to run NodePressure ...
	I1225 13:28:01.952493 1482618 start.go:228] waiting for startup goroutines ...
	I1225 13:28:01.952500 1482618 start.go:233] waiting for cluster config update ...
	I1225 13:28:01.952512 1482618 start.go:242] writing updated cluster config ...
	I1225 13:28:01.952974 1482618 ssh_runner.go:195] Run: rm -f paused
	I1225 13:28:02.007549 1482618 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I1225 13:28:02.009559 1482618 out.go:177] 
	W1225 13:28:02.011242 1482618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I1225 13:28:02.012738 1482618 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1225 13:28:02.014029 1482618 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-198979" cluster and "default" namespace by default
	I1225 13:28:01.869370 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.368824 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.055713 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:02.553981 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.554824 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:03.629358 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.130616 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.869993 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.367869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:07.054835 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.554904 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:08.130786 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:10.632435 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:11.368789 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.867665 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:12.054007 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:14.554676 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.129854 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.628997 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.869048 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:18.368070 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:16.557633 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:19.054486 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:17.629072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.129902 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.868173 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.868637 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:21.555027 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.054858 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.133148 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.630133 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:25.369437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.870029 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:26.056198 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:28.555876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.129583 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:29.629963 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:30.367773 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.368497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.369791 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:31.053212 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:33.054315 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.128310 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.130650 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.869325 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.367488 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:35.056761 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:37.554917 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.632857 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.129518 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.368425 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:43.868157 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:40.054854 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:42.555015 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:45.053900 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.630558 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:44.132072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.366422 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:48.368331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:47.056378 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.555186 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.629415 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.129249 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:51.129692 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:50.868321 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.366805 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:52.053785 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:54.057533 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.629427 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.629652 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.368197 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.867659 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.868187 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:56.556558 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.055474 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.629912 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.630858 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.868360 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:03.870936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.555132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.053887 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:02.127901 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.131186 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.367634 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.867571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.054546 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.554559 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.629995 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:09.129898 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:10.868677 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:12.868979 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.055554 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:13.554637 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.629511 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.129806 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.872549 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:17.371705 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:19.868438 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.054016 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.055476 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.629688 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.630125 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:21.132102 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.367525 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:24.369464 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:20.554660 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.556044 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:25.054213 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:23.630061 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.132281 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.868977 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.367384 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:27.055844 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.554124 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:28.630474 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:30.631070 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.367691 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.867941 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.555167 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.557066 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:32.634599 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:35.131402 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.369081 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.868497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.054764 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.054975 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:37.629895 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:39.630456 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:41.366745 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:43.367883 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:40.554998 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.555257 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.130638 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:44.629851 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.371692 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.866965 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.868100 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.057506 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.555247 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:46.632874 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.129782 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.130176 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.868818 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.868968 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:50.055939 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:52.556609 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.054048 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.132556 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.632608 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:56.368065 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.868076 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:57.054224 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:59.554940 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.128545 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.129437 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.868364 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:03.368093 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.054215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.056019 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.129706 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.130092 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:05.867992 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:07.872121 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.554889 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:09.056197 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.630974 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:08.632171 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.128952 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:10.367536 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:12.369331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.554738 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.555681 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.129878 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:15.130470 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:14.868630 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.367768 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.368295 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:16.054391 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:18.054606 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.630479 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.630971 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:21.873194 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.368931 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:20.054866 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.554974 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:25.053696 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.130831 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.630755 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:26.867555 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:28.868612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.054706 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.055614 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.133840 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.630572 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:30.868716 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.369710 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:31.554882 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.556367 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:32.129865 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:34.129987 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.870671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:38.367237 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.557755 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:37.559481 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:36.630513 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:39.130271 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.368072 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.869043 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.055427 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.554787 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.053876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:41.629178 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:43.630237 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.631199 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:44.873439 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.367548 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.368066 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.555106 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.556132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:48.130206 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:50.629041 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:51.369311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:53.870853 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.055511 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:54.061135 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.630215 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.130153 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.873755 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:58.367682 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:56.554861 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.054344 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:57.629571 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.630560 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:00.372506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:02.867084 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:01.554332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:03.554717 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.555955 1483118 pod_ready.go:81] duration metric: took 4m0.009196678s waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:04.555987 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:04.555994 1483118 pod_ready.go:38] duration metric: took 4m2.890580557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:04.556014 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:04.556050 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:04.556152 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:04.615717 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:04.615748 1483118 cri.go:89] found id: ""
	I1225 13:31:04.615759 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:04.615830 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.621669 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:04.621778 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:04.661088 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:04.661127 1483118 cri.go:89] found id: ""
	I1225 13:31:04.661139 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:04.661191 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.666410 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:04.666496 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:04.710927 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:04.710962 1483118 cri.go:89] found id: ""
	I1225 13:31:04.710973 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:04.711041 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.715505 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:04.715587 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:04.761494 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:04.761518 1483118 cri.go:89] found id: ""
	I1225 13:31:04.761527 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:04.761580 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.766925 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:04.767015 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:04.810640 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:04.810670 1483118 cri.go:89] found id: ""
	I1225 13:31:04.810685 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:04.810753 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.815190 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:04.815285 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:04.858275 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:04.858301 1483118 cri.go:89] found id: ""
	I1225 13:31:04.858309 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:04.858362 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.863435 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:04.863529 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:04.914544 1483118 cri.go:89] found id: ""
	I1225 13:31:04.914583 1483118 logs.go:284] 0 containers: []
	W1225 13:31:04.914594 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:04.914603 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:04.914675 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:04.969548 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:04.969577 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:04.969584 1483118 cri.go:89] found id: ""
	I1225 13:31:04.969594 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:04.969660 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.974172 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.978956 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:04.978989 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:05.033590 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:05.033632 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:02.133447 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.630226 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.869025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:07.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.369061 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:05.085851 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:05.085879 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:05.144002 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:05.144047 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:05.191669 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:05.191703 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:05.238581 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:05.238617 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:05.253236 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:05.253271 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:05.293626 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:05.293674 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:05.338584 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:05.338622 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:05.381135 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:05.381172 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:05.886860 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:05.886918 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:06.045040 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:06.045080 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.101152 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:06.101192 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.662518 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:08.678649 1483118 api_server.go:72] duration metric: took 4m14.820531999s to wait for apiserver process to appear ...
	I1225 13:31:08.678687 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:08.678729 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:08.678791 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:08.718202 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:08.718246 1483118 cri.go:89] found id: ""
	I1225 13:31:08.718255 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:08.718305 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.723089 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:08.723177 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:08.772619 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:08.772641 1483118 cri.go:89] found id: ""
	I1225 13:31:08.772649 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:08.772709 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.777577 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:08.777669 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:08.818869 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:08.818900 1483118 cri.go:89] found id: ""
	I1225 13:31:08.818910 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:08.818970 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.823301 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:08.823382 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:08.868885 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:08.868913 1483118 cri.go:89] found id: ""
	I1225 13:31:08.868924 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:08.868982 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.873489 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:08.873562 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:08.916925 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:08.916957 1483118 cri.go:89] found id: ""
	I1225 13:31:08.916967 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:08.917065 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.921808 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:08.921901 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:08.961586 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.961617 1483118 cri.go:89] found id: ""
	I1225 13:31:08.961628 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:08.961707 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.965986 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:08.966096 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:09.012223 1483118 cri.go:89] found id: ""
	I1225 13:31:09.012262 1483118 logs.go:284] 0 containers: []
	W1225 13:31:09.012270 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:09.012278 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:09.012343 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:09.060646 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:09.060675 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:09.060683 1483118 cri.go:89] found id: ""
	I1225 13:31:09.060694 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:09.060767 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.065955 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.070859 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:09.070890 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:09.128056 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:09.128096 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:09.179304 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:09.179341 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:09.194019 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:09.194048 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:09.339697 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:09.339743 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:09.389626 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:09.389669 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:09.831437 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:09.831498 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:09.888799 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:09.888848 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:09.932201 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:09.932232 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:09.983201 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:09.983242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:10.039094 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:10.039149 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.630567 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.130605 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:11.369445 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.870404 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:10.095628 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:10.095677 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:10.139678 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:10.139717 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:12.688297 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:31:12.693469 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:31:12.694766 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:31:12.694788 1483118 api_server.go:131] duration metric: took 4.016094906s to wait for apiserver health ...
	I1225 13:31:12.694796 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:12.694821 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:12.694876 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:12.743143 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:12.743174 1483118 cri.go:89] found id: ""
	I1225 13:31:12.743185 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:12.743238 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.747708 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:12.747803 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:12.800511 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:12.800540 1483118 cri.go:89] found id: ""
	I1225 13:31:12.800549 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:12.800612 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.805236 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:12.805308 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:12.850047 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:12.850081 1483118 cri.go:89] found id: ""
	I1225 13:31:12.850092 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:12.850152 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.854516 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:12.854602 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:12.902131 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:12.902162 1483118 cri.go:89] found id: ""
	I1225 13:31:12.902173 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:12.902239 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.907546 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:12.907634 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:12.966561 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:12.966590 1483118 cri.go:89] found id: ""
	I1225 13:31:12.966601 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:12.966674 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.971071 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:12.971161 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:13.026823 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.026851 1483118 cri.go:89] found id: ""
	I1225 13:31:13.026862 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:13.026927 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.031499 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:13.031576 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:13.077486 1483118 cri.go:89] found id: ""
	I1225 13:31:13.077512 1483118 logs.go:284] 0 containers: []
	W1225 13:31:13.077520 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:13.077526 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:13.077589 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:13.130262 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.130287 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.130294 1483118 cri.go:89] found id: ""
	I1225 13:31:13.130305 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:13.130364 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.138345 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.142749 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:13.142780 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:13.264652 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:13.264694 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:13.315138 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:13.315182 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:13.375532 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:13.375570 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.418188 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:13.418226 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:13.433392 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:13.433423 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:13.472447 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:13.472481 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.514578 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:13.514631 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:13.568962 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:13.569001 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:13.609819 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:13.609864 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.668114 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:13.668160 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:13.710116 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:13.710155 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:14.068484 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:14.068548 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:11.629829 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.632277 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:15.629964 1483946 pod_ready.go:81] duration metric: took 4m0.008391697s waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:15.629997 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:15.630006 1483946 pod_ready.go:38] duration metric: took 4m4.430454443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:15.630022 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:15.630052 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:15.630113 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:15.694629 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:15.694654 1483946 cri.go:89] found id: ""
	I1225 13:31:15.694666 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:15.694735 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.699777 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:15.699847 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:15.744267 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:15.744299 1483946 cri.go:89] found id: ""
	I1225 13:31:15.744308 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:15.744361 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.749213 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:15.749310 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:15.796903 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:15.796930 1483946 cri.go:89] found id: ""
	I1225 13:31:15.796939 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:15.797001 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.801601 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:15.801673 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:15.841792 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:15.841820 1483946 cri.go:89] found id: ""
	I1225 13:31:15.841830 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:15.841902 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.845893 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:15.845970 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:15.901462 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:15.901493 1483946 cri.go:89] found id: ""
	I1225 13:31:15.901505 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:15.901589 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.907173 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:15.907264 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:15.957143 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:15.957177 1483946 cri.go:89] found id: ""
	I1225 13:31:15.957186 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:15.957239 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.962715 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:15.962789 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:16.007949 1483946 cri.go:89] found id: ""
	I1225 13:31:16.007988 1483946 logs.go:284] 0 containers: []
	W1225 13:31:16.007999 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:16.008008 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:16.008076 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:16.063958 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:16.063984 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:16.063989 1483946 cri.go:89] found id: ""
	I1225 13:31:16.063997 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:16.064052 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.069193 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.074310 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:16.074333 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:16.120318 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:16.120363 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:16.176217 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:16.176264 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:16.633470 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:16.633507 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.633512 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.633516 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.633521 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.633525 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.633529 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.633536 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.633541 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.633548 1483118 system_pods.go:74] duration metric: took 3.938745899s to wait for pod list to return data ...
	I1225 13:31:16.633556 1483118 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:16.637279 1483118 default_sa.go:45] found service account: "default"
	I1225 13:31:16.637314 1483118 default_sa.go:55] duration metric: took 3.749637ms for default service account to be created ...
	I1225 13:31:16.637325 1483118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:16.644466 1483118 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:16.644501 1483118 system_pods.go:89] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.644509 1483118 system_pods.go:89] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.644516 1483118 system_pods.go:89] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.644523 1483118 system_pods.go:89] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.644530 1483118 system_pods.go:89] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.644536 1483118 system_pods.go:89] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.644547 1483118 system_pods.go:89] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.644558 1483118 system_pods.go:89] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.644583 1483118 system_pods.go:126] duration metric: took 7.250639ms to wait for k8s-apps to be running ...
	I1225 13:31:16.644594 1483118 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:16.644658 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:16.661680 1483118 system_svc.go:56] duration metric: took 17.070893ms WaitForService to wait for kubelet.
	I1225 13:31:16.661723 1483118 kubeadm.go:581] duration metric: took 4m22.80360778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:16.661754 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:16.666189 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:16.666227 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:16.666294 1483118 node_conditions.go:105] duration metric: took 4.531137ms to run NodePressure ...
	I1225 13:31:16.666313 1483118 start.go:228] waiting for startup goroutines ...
	I1225 13:31:16.666323 1483118 start.go:233] waiting for cluster config update ...
	I1225 13:31:16.666338 1483118 start.go:242] writing updated cluster config ...
	I1225 13:31:16.666702 1483118 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:16.729077 1483118 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:31:16.732824 1483118 out.go:177] * Done! kubectl is now configured to use "no-preload-330063" cluster and "default" namespace by default
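Editor's note: at this point the no-preload-330063 profile is up and kubectl is configured against it, so the cluster can be inspected directly from the host; a minimal sketch, assuming the kubeconfig context name matches the profile name as reported in the "Done!" line above:

	kubectl --context no-preload-330063 get pods -A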
	I1225 13:31:16.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:18.374788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:16.686611 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:16.686650 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:16.748667 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:16.748705 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:16.937661 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:16.937700 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:16.988870 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:16.988908 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:17.048278 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:17.048316 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:17.095857 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:17.095900 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:17.135425 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:17.135460 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:17.197626 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:17.197670 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:17.213658 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:17.213695 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:17.282101 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:17.282149 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:19.824939 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:19.840944 1483946 api_server.go:72] duration metric: took 4m11.866743679s to wait for apiserver process to appear ...
	I1225 13:31:19.840985 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:19.841036 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:19.841114 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:19.895404 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:19.895445 1483946 cri.go:89] found id: ""
	I1225 13:31:19.895455 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:19.895519 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.900604 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:19.900686 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:19.943623 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:19.943652 1483946 cri.go:89] found id: ""
	I1225 13:31:19.943662 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:19.943728 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.948230 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:19.948298 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:19.993271 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:19.993296 1483946 cri.go:89] found id: ""
	I1225 13:31:19.993304 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:19.993355 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.997702 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:19.997790 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:20.043487 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.043514 1483946 cri.go:89] found id: ""
	I1225 13:31:20.043525 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:20.043591 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.047665 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:20.047748 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:20.091832 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.091867 1483946 cri.go:89] found id: ""
	I1225 13:31:20.091878 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:20.091947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.096400 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:20.096463 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:20.136753 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.136785 1483946 cri.go:89] found id: ""
	I1225 13:31:20.136794 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:20.136867 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.141479 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:20.141559 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:20.184635 1483946 cri.go:89] found id: ""
	I1225 13:31:20.184677 1483946 logs.go:284] 0 containers: []
	W1225 13:31:20.184688 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:20.184694 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:20.184770 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:20.231891 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.231918 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.231923 1483946 cri.go:89] found id: ""
	I1225 13:31:20.231932 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:20.231991 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.236669 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.240776 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:20.240804 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:20.305411 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:20.305479 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:20.376688 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:20.376729 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:20.419016 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:20.419060 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.465253 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:20.465288 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.505949 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:20.505994 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.565939 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:20.565995 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.608765 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:20.608798 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.646031 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:20.646076 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:20.694772 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:20.694812 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:20.710038 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:20.710074 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:20.841944 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:20.841996 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:21.267824 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:21.267884 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:20.869158 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:22.870463 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:23.834749 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:31:23.840763 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:31:23.842396 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:31:23.842424 1483946 api_server.go:131] duration metric: took 4.001431078s to wait for apiserver health ...
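Editor's note: the healthz probe above goes straight to the apiserver endpoint reported in the log. A comparable check can be made from the node with curl; a sketch only, assuming anonymous access to /healthz is still enabled (the Kubernetes default) so no client certificate is needed, with -k skipping verification of the apiserver's self-signed certificate, and reusing the embed-certs-880612 profile and the address from the healthz line above:

	out/minikube-linux-amd64 -p embed-certs-880612 ssh "curl -sk https://192.168.50.179:8443/healthz"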
	I1225 13:31:23.842451 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:23.842481 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:23.842535 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:23.901377 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:23.901409 1483946 cri.go:89] found id: ""
	I1225 13:31:23.901420 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:23.901489 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.906312 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:23.906382 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:23.957073 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:23.957105 1483946 cri.go:89] found id: ""
	I1225 13:31:23.957115 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:23.957175 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.961899 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:23.961968 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:24.009529 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:24.009575 1483946 cri.go:89] found id: ""
	I1225 13:31:24.009587 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:24.009656 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.014579 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:24.014668 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:24.059589 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:24.059618 1483946 cri.go:89] found id: ""
	I1225 13:31:24.059629 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:24.059698 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.065185 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:24.065265 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:24.123904 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.123932 1483946 cri.go:89] found id: ""
	I1225 13:31:24.123942 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:24.124006 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.128753 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:24.128849 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:24.172259 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:24.172285 1483946 cri.go:89] found id: ""
	I1225 13:31:24.172296 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:24.172363 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.177276 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:24.177356 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:24.223415 1483946 cri.go:89] found id: ""
	I1225 13:31:24.223445 1483946 logs.go:284] 0 containers: []
	W1225 13:31:24.223453 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:24.223459 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:24.223516 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:24.267840 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:24.267866 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:24.267870 1483946 cri.go:89] found id: ""
	I1225 13:31:24.267878 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:24.267939 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.272947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.279183 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:24.279213 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:24.343548 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:24.343592 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:24.398275 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:24.398312 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.443435 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:24.443472 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:24.814711 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:24.814770 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:24.828613 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:24.828649 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:24.979501 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:24.979538 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:25.028976 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:25.029011 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:25.083148 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:25.083191 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:25.155284 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:25.155336 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:25.213437 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:25.213483 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:25.260934 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:25.260973 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:25.307395 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:25.307430 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:27.884673 1483946 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:27.884702 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.884708 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.884713 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.884717 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.884721 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.884725 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.884731 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.884737 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.884744 1483946 system_pods.go:74] duration metric: took 4.04228589s to wait for pod list to return data ...
	I1225 13:31:27.884752 1483946 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:27.889125 1483946 default_sa.go:45] found service account: "default"
	I1225 13:31:27.889156 1483946 default_sa.go:55] duration metric: took 4.397454ms for default service account to be created ...
	I1225 13:31:27.889167 1483946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:27.896851 1483946 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:27.896879 1483946 system_pods.go:89] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.896884 1483946 system_pods.go:89] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.896889 1483946 system_pods.go:89] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.896894 1483946 system_pods.go:89] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.896898 1483946 system_pods.go:89] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.896901 1483946 system_pods.go:89] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.896908 1483946 system_pods.go:89] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.896912 1483946 system_pods.go:89] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.896920 1483946 system_pods.go:126] duration metric: took 7.747348ms to wait for k8s-apps to be running ...
	I1225 13:31:27.896929 1483946 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:27.896981 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:27.917505 1483946 system_svc.go:56] duration metric: took 20.559839ms WaitForService to wait for kubelet.
	I1225 13:31:27.917542 1483946 kubeadm.go:581] duration metric: took 4m19.94335169s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:27.917568 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:27.921689 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:27.921715 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:27.921797 1483946 node_conditions.go:105] duration metric: took 4.219723ms to run NodePressure ...
	I1225 13:31:27.921814 1483946 start.go:228] waiting for startup goroutines ...
	I1225 13:31:27.921825 1483946 start.go:233] waiting for cluster config update ...
	I1225 13:31:27.921838 1483946 start.go:242] writing updated cluster config ...
	I1225 13:31:27.922130 1483946 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:27.976011 1483946 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:31:27.978077 1483946 out.go:177] * Done! kubectl is now configured to use "embed-certs-880612" cluster and "default" namespace by default
	I1225 13:31:24.870628 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:26.873379 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:29.367512 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:31.367730 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:33.867551 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:36.360292 1484104 pod_ready.go:81] duration metric: took 4m0.000407846s waiting for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:36.360349 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" (will not retry!)
	I1225 13:31:36.360378 1484104 pod_ready.go:38] duration metric: took 4m12.556234617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:36.360445 1484104 kubeadm.go:640] restartCluster took 4m32.941510355s
	W1225 13:31:36.360540 1484104 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1225 13:31:36.360578 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1225 13:31:50.552320 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.191703988s)
	I1225 13:31:50.552417 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:50.569621 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:31:50.581050 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:31:50.591777 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:31:50.591837 1484104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:31:50.651874 1484104 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 13:31:50.651952 1484104 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:31:50.822009 1484104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:31:50.822174 1484104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:31:50.822258 1484104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:31:51.074237 1484104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:31:51.077463 1484104 out.go:204]   - Generating certificates and keys ...
	I1225 13:31:51.077575 1484104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:31:51.077637 1484104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:31:51.077703 1484104 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 13:31:51.077755 1484104 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1225 13:31:51.077816 1484104 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1225 13:31:51.077908 1484104 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1225 13:31:51.078059 1484104 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1225 13:31:51.078715 1484104 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1225 13:31:51.079408 1484104 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 13:31:51.080169 1484104 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 13:31:51.080635 1484104 kubeadm.go:322] [certs] Using the existing "sa" key
	I1225 13:31:51.080724 1484104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:31:51.147373 1484104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:31:51.298473 1484104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:31:51.403869 1484104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:31:51.719828 1484104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:31:51.720523 1484104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:31:51.725276 1484104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:31:51.727100 1484104 out.go:204]   - Booting up control plane ...
	I1225 13:31:51.727248 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:31:51.727343 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:31:51.727431 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:31:51.745500 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:31:51.746331 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:31:51.746392 1484104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:31:51.897052 1484104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:32:00.401261 1484104 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504339 seconds
	I1225 13:32:00.401463 1484104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:32:00.422010 1484104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:32:00.962174 1484104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:32:00.962418 1484104 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-344803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:32:01.479956 1484104 kubeadm.go:322] [bootstrap-token] Using token: 7n7qlp.3wejtqrgqunjtf8y
	I1225 13:32:01.481699 1484104 out.go:204]   - Configuring RBAC rules ...
	I1225 13:32:01.481862 1484104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:32:01.489709 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:32:01.499287 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:32:01.504520 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:32:01.508950 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:32:01.517277 1484104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:32:01.537420 1484104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:32:01.820439 1484104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:32:01.897010 1484104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:32:01.897039 1484104 kubeadm.go:322] 
	I1225 13:32:01.897139 1484104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:32:01.897169 1484104 kubeadm.go:322] 
	I1225 13:32:01.897259 1484104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:32:01.897270 1484104 kubeadm.go:322] 
	I1225 13:32:01.897292 1484104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:32:01.897383 1484104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:32:01.897471 1484104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:32:01.897484 1484104 kubeadm.go:322] 
	I1225 13:32:01.897558 1484104 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:32:01.897568 1484104 kubeadm.go:322] 
	I1225 13:32:01.897621 1484104 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:32:01.897629 1484104 kubeadm.go:322] 
	I1225 13:32:01.897702 1484104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:32:01.897822 1484104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:32:01.897923 1484104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:32:01.897935 1484104 kubeadm.go:322] 
	I1225 13:32:01.898040 1484104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:32:01.898141 1484104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:32:01.898156 1484104 kubeadm.go:322] 
	I1225 13:32:01.898264 1484104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898455 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:32:01.898506 1484104 kubeadm.go:322] 	--control-plane 
	I1225 13:32:01.898516 1484104 kubeadm.go:322] 
	I1225 13:32:01.898627 1484104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:32:01.898645 1484104 kubeadm.go:322] 
	I1225 13:32:01.898760 1484104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898898 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:32:01.899552 1484104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:32:01.899699 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:32:01.899720 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:32:01.902817 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:32:01.904375 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:32:01.943752 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
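Editor's note: the 457-byte conflist copied above is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination mentioned a few lines earlier. If the generated file needs to be inspected, it can be read back from the node; a sketch using the path from the line above and the default-k8s-diff-port-344803 profile from this run:

	out/minikube-linux-amd64 -p default-k8s-diff-port-344803 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"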
	I1225 13:32:02.004751 1484104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:32:02.004915 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=default-k8s-diff-port-344803 minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.004920 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.377800 1484104 ops.go:34] apiserver oom_adj: -16
	I1225 13:32:02.378388 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.879083 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.379453 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.878676 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.378589 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.878630 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.378615 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.879009 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.379100 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.878610 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.378604 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.878597 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.379427 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.878637 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.378638 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.879200 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.378659 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.879285 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.378603 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.878605 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.379451 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.879431 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.379034 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.878468 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.378592 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.878569 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:15.008581 1484104 kubeadm.go:1088] duration metric: took 13.00372954s to wait for elevateKubeSystemPrivileges.
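Editor's note: the repeated "kubectl get sa default" calls above poll (roughly every 500 ms, per the timestamps) until the default service account exists; the duration metric on the line above records how long that elevateKubeSystemPrivileges wait took. The same check can be run by hand; a sketch, assuming the kubeconfig context matches the profile name:

	kubectl --context default-k8s-diff-port-344803 get serviceaccount default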
	I1225 13:32:15.008626 1484104 kubeadm.go:406] StartCluster complete in 5m11.652335467s
	I1225 13:32:15.008653 1484104 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.008763 1484104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:32:15.011655 1484104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.011982 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:32:15.012172 1484104 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:32:15.012258 1484104 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012285 1484104 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012297 1484104 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:32:15.012311 1484104 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012347 1484104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-344803"
	I1225 13:32:15.012363 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012798 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012800 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012831 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012833 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012898 1484104 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012912 1484104 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012919 1484104 addons.go:246] addon metrics-server should already be in state true
	I1225 13:32:15.012961 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012972 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:32:15.013289 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.013318 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.032424 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1225 13:32:15.032981 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1225 13:32:15.033180 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1225 13:32:15.033455 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033575 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033623 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.034052 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034069 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034173 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034195 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034209 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034238 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034412 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034635 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034693 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034728 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.036190 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036205 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036228 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.036229 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.040383 1484104 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.040442 1484104 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:32:15.040473 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.040780 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.040820 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.055366 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1225 13:32:15.055979 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.056596 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.056623 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1225 13:32:15.057067 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057205 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057218 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.057413 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.057741 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.057768 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.057958 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.058013 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.058122 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058413 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058776 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.058816 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.059142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.059588 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.061854 1484104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:32:15.060849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.063569 1484104 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.063593 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:32:15.065174 1484104 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:32:15.063622 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.066654 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:32:15.066677 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:32:15.066700 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.071209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071995 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072039 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072074 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072319 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072558 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072875 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.072941 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.073085 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.073138 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.077927 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1225 13:32:15.078428 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.079241 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.079262 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.079775 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.079983 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.081656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.082002 1484104 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.082024 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:32:15.082047 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.085367 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.085779 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.085805 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.086119 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.086390 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.086656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.086875 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.262443 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:32:15.262470 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:32:15.270730 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:32:15.285178 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.302070 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:32:15.302097 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:32:15.303686 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.373021 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.373054 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:32:15.461862 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.518928 1484104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-344803" context rescaled to 1 replicas
	I1225 13:32:15.518973 1484104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:32:15.520858 1484104 out.go:177] * Verifying Kubernetes components...
	I1225 13:32:15.522326 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:32:16.993620 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.72284687s)
	I1225 13:32:16.993667 1484104 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
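The two runner calls above rewrite the coredns ConfigMap over SSH so that host.minikube.internal resolves to the host gateway address (192.168.61.1 here). For illustration only, here is a minimal client-go sketch that would make the same edit directly against the API server; the kubeconfig path, the "Corefile" data key, and the exact forward-line match are assumptions, and minikube itself performs this step with the kubectl/sed pipeline shown in the log, not with this code.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig path the log uses (assumed).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Insert a hosts{} stanza ahead of the forward plugin, mirroring the sed edit above.
        hosts := "        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }\n"
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            cm.Data["Corefile"] = strings.Replace(corefile,
                "        forward . /etc/resolv.conf",
                hosts+"        forward . /etc/resolv.conf", 1)
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
        fmt.Println("coredns ConfigMap updated")
    }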
	I1225 13:32:17.329206 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.025471574s)
	I1225 13:32:17.329305 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329321 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329352 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044135646s)
	I1225 13:32:17.329411 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329430 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329697 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329722 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329737 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329747 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.329764 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329740 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329805 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329825 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329838 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.331647 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331675 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331706 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331715 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.331734 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331766 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.350031 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.350068 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.350458 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.350499 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.350516 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.582723 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.120815372s)
	I1225 13:32:17.582785 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.582798 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.582787 1484104 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.060422325s)
	I1225 13:32:17.582838 1484104 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.583145 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583172 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.583179 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583192 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.583201 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.583438 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583461 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583471 1484104 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-344803"
	I1225 13:32:17.585288 1484104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:32:17.586537 1484104 addons.go:508] enable addons completed in 2.574365441s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:32:17.595130 1484104 node_ready.go:49] node "default-k8s-diff-port-344803" has status "Ready":"True"
	I1225 13:32:17.595165 1484104 node_ready.go:38] duration metric: took 12.307997ms waiting for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.595181 1484104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:32:17.613099 1484104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:19.621252 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:20.621494 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.621519 1484104 pod_ready.go:81] duration metric: took 3.008379569s waiting for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.621528 1484104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630348 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.630375 1484104 pod_ready.go:81] duration metric: took 8.841316ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630387 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636928 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.636953 1484104 pod_ready.go:81] duration metric: took 6.558203ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636963 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643335 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.643360 1484104 pod_ready.go:81] duration metric: took 6.390339ms waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643369 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649496 1484104 pod_ready.go:92] pod "kube-proxy-fpk9s" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.649526 1484104 pod_ready.go:81] duration metric: took 6.150243ms waiting for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649535 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018065 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:21.018092 1484104 pod_ready.go:81] duration metric: took 368.549291ms waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018102 1484104 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:23.026953 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:25.525822 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:27.530780 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:30.033601 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:32.528694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:34.529208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:37.028717 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:39.526632 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:42.026868 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:44.028002 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:46.526534 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:48.529899 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:51.026062 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:53.525655 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:55.526096 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:58.026355 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:00.026674 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:02.029299 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:04.526609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:06.526810 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:09.026498 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:11.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:13.029416 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:15.526242 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:18.026664 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:20.529125 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:23.026694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:25.029350 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:27.527537 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:30.030562 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:32.526381 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:34.526801 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:37.027939 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:39.526249 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:41.526511 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:43.526783 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:45.527693 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:48.026703 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:50.027582 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:52.526290 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:55.027458 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:57.526559 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:59.526699 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:01.527938 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:03.529353 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:06.025942 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:08.027340 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:10.028087 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:12.525688 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:14.527122 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:16.529380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:19.026128 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:21.026183 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:23.027208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:25.526282 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:27.531847 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:30.030025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:32.526291 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:34.526470 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:36.527179 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:39.026270 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:41.029609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:43.528905 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:46.026666 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:48.528560 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:51.025864 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:53.027211 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:55.527359 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:58.025696 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:00.027368 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:02.027605 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:04.525836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:06.526571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:08.528550 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:11.026765 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:13.028215 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:15.525903 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:17.527102 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:20.026011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:22.525873 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:24.528380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:27.026402 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:29.527869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:32.026671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:34.026737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:36.026836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:38.526788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:41.027387 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:43.526936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:46.026316 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:48.026940 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:50.526565 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:53.025988 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:55.027146 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:57.527287 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:00.028971 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:02.526704 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:05.025995 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:07.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:09.027839 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:11.526845 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:13.527737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:16.026967 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:18.028747 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:20.527437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:21.027372 1484104 pod_ready.go:81] duration metric: took 4m0.009244403s waiting for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	E1225 13:36:21.027405 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:36:21.027418 1484104 pod_ready.go:38] duration metric: took 4m3.432224558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
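The long run of pod_ready.go:102 lines above is minikube repeatedly checking the metrics-server pod until its Ready condition turns True or the extra-wait budget expires, which is what ultimately times out here. A stripped-down sketch of that kind of readiness wait with client-go follows; it is not minikube's actual pod_ready.go, and the kubeconfig path, poll interval, and pod name are either taken from the log or assumed.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) bool {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false // keep polling on transient errors
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s for up to 6m, mirroring the 6m0s budget reported in the log.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            return podReady(cs, "kube-system", "metrics-server-57f55c9bc5-slv7p"), nil
        })
        fmt.Println("pod ready:", err == nil)
    }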
	I1225 13:36:21.027474 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:36:21.027560 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:21.027806 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:21.090421 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:21.090464 1484104 cri.go:89] found id: ""
	I1225 13:36:21.090474 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:21.090526 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.095523 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:21.095605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:21.139092 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:21.139126 1484104 cri.go:89] found id: ""
	I1225 13:36:21.139136 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:21.139206 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.143957 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:21.144038 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:21.190905 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:21.190937 1484104 cri.go:89] found id: ""
	I1225 13:36:21.190948 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:21.191018 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.195814 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:21.195882 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:21.240274 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:21.240307 1484104 cri.go:89] found id: ""
	I1225 13:36:21.240317 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:21.240384 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.244831 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:21.244930 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:21.289367 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:21.289399 1484104 cri.go:89] found id: ""
	I1225 13:36:21.289410 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:21.289478 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.293796 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:21.293878 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:21.338757 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:21.338789 1484104 cri.go:89] found id: ""
	I1225 13:36:21.338808 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:21.338878 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.343145 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:21.343217 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:21.384898 1484104 cri.go:89] found id: ""
	I1225 13:36:21.384929 1484104 logs.go:284] 0 containers: []
	W1225 13:36:21.384936 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:21.384943 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:21.385006 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:21.436776 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:21.436809 1484104 cri.go:89] found id: ""
	I1225 13:36:21.436818 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:21.436871 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.442173 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:21.442210 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:21.886890 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:21.886944 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:21.971380 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:21.971568 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:21.992672 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:21.992724 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:22.015144 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:22.015198 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:22.195011 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:22.195060 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:22.237377 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:22.237423 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:22.284207 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:22.284240 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:22.343882 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:22.343939 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:22.404320 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:22.404356 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:22.465126 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:22.465175 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:22.521920 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:22.521963 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:22.575563 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:22.575601 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:22.627508 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627549 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:22.627808 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:22.627849 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:22.627862 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:22.627871 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627882 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
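Each "Gathering logs for ..." step above boils down to running crictl or journalctl on the guest and capturing the output. A trivial local illustration of one such call, reusing the storage-provisioner container ID from the log; in minikube this command runs over SSH via ssh_runner rather than locally, so treat the direct invocation as an assumption.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command shape as the runner lines above; requires crictl and sudo on this machine.
        cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400",
            "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }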
	I1225 13:36:32.629903 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:36:32.648435 1484104 api_server.go:72] duration metric: took 4m17.129427556s to wait for apiserver process to appear ...
	I1225 13:36:32.648461 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:36:32.648499 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:32.648567 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:32.705637 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:32.705673 1484104 cri.go:89] found id: ""
	I1225 13:36:32.705685 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:32.705754 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.710516 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:32.710591 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:32.757193 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:32.757225 1484104 cri.go:89] found id: ""
	I1225 13:36:32.757236 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:32.757302 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.762255 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:32.762335 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:32.812666 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:32.812692 1484104 cri.go:89] found id: ""
	I1225 13:36:32.812703 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:32.812758 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.817599 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:32.817676 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:32.861969 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:32.862011 1484104 cri.go:89] found id: ""
	I1225 13:36:32.862021 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:32.862084 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.868439 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:32.868525 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:32.929969 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:32.930006 1484104 cri.go:89] found id: ""
	I1225 13:36:32.930015 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:32.930077 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.936071 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:32.936149 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:32.980256 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:32.980280 1484104 cri.go:89] found id: ""
	I1225 13:36:32.980288 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:32.980345 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.985508 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:32.985605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:33.029393 1484104 cri.go:89] found id: ""
	I1225 13:36:33.029429 1484104 logs.go:284] 0 containers: []
	W1225 13:36:33.029440 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:33.029448 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:33.029521 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:33.075129 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.075156 1484104 cri.go:89] found id: ""
	I1225 13:36:33.075167 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:33.075229 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:33.079900 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:33.079940 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.121355 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:33.121391 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:33.205175 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:33.205394 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:33.225359 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:33.225393 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:33.282658 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:33.282710 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:33.334586 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:33.334627 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:33.383538 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:33.383576 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:33.438245 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:33.438284 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:33.487260 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:33.487305 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:33.504627 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:33.504665 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:33.641875 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:33.641912 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:33.692275 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:33.692311 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:33.731932 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:33.731971 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:34.081286 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081325 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:34.081438 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:34.081456 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:34.081465 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:34.081477 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081490 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:44.083633 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:36:44.091721 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:36:44.093215 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:36:44.093242 1484104 api_server.go:131] duration metric: took 11.444775391s to wait for apiserver health ...
	I1225 13:36:44.093251 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:36:44.093279 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:44.093330 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:44.135179 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:44.135212 1484104 cri.go:89] found id: ""
	I1225 13:36:44.135229 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:44.135308 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.140367 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:44.140455 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:44.179525 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:44.179557 1484104 cri.go:89] found id: ""
	I1225 13:36:44.179568 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:44.179644 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.184724 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:44.184822 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:44.225306 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:44.225339 1484104 cri.go:89] found id: ""
	I1225 13:36:44.225351 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:44.225418 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.230354 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:44.230459 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:44.272270 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:44.272300 1484104 cri.go:89] found id: ""
	I1225 13:36:44.272311 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:44.272387 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.277110 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:44.277187 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:44.326495 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.326519 1484104 cri.go:89] found id: ""
	I1225 13:36:44.326527 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:44.326579 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.333707 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:44.333799 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:44.380378 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:44.380410 1484104 cri.go:89] found id: ""
	I1225 13:36:44.380423 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:44.380488 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.390075 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:44.390171 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:44.440171 1484104 cri.go:89] found id: ""
	I1225 13:36:44.440211 1484104 logs.go:284] 0 containers: []
	W1225 13:36:44.440223 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:44.440233 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:44.440321 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:44.482074 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:44.482104 1484104 cri.go:89] found id: ""
	I1225 13:36:44.482114 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:44.482178 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.487171 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:44.487209 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.532144 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:44.532179 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:44.891521 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:44.891568 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:44.938934 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:44.938967 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:45.017433 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.017627 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.039058 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:45.039097 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:45.054560 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:45.054592 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:45.113698 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:45.113735 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:45.158302 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:45.158342 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:45.204784 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:45.204824 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:45.276442 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:45.276483 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:45.320645 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:45.320678 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:45.452638 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:45.452681 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:45.500718 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500757 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:45.500817 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:45.500833 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.500844 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.500853 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500859 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:55.510930 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:36:55.510962 1484104 system_pods.go:61] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.510968 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.510973 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.510977 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.510984 1484104 system_pods.go:61] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.510987 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.510995 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.510999 1484104 system_pods.go:61] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.511014 1484104 system_pods.go:74] duration metric: took 11.417757674s to wait for pod list to return data ...
	I1225 13:36:55.511025 1484104 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:36:55.514087 1484104 default_sa.go:45] found service account: "default"
	I1225 13:36:55.514112 1484104 default_sa.go:55] duration metric: took 3.081452ms for default service account to be created ...
	I1225 13:36:55.514120 1484104 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:36:55.521321 1484104 system_pods.go:86] 8 kube-system pods found
	I1225 13:36:55.521355 1484104 system_pods.go:89] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.521365 1484104 system_pods.go:89] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.521370 1484104 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.521375 1484104 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.521380 1484104 system_pods.go:89] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.521387 1484104 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.521397 1484104 system_pods.go:89] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.521409 1484104 system_pods.go:89] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.521421 1484104 system_pods.go:126] duration metric: took 7.294824ms to wait for k8s-apps to be running ...
	I1225 13:36:55.521433 1484104 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:36:55.521492 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:36:55.540217 1484104 system_svc.go:56] duration metric: took 18.766893ms WaitForService to wait for kubelet.
	I1225 13:36:55.540248 1484104 kubeadm.go:581] duration metric: took 4m40.021246946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:36:55.540271 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:36:55.544519 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:36:55.544685 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:36:55.544742 1484104 node_conditions.go:105] duration metric: took 4.463666ms to run NodePressure ...
	I1225 13:36:55.544783 1484104 start.go:228] waiting for startup goroutines ...
	I1225 13:36:55.544795 1484104 start.go:233] waiting for cluster config update ...
	I1225 13:36:55.544810 1484104 start.go:242] writing updated cluster config ...
	I1225 13:36:55.545268 1484104 ssh_runner.go:195] Run: rm -f paused
	I1225 13:36:55.607984 1484104 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:36:55.609993 1484104 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-344803" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:25 UTC, ends at Mon 2023-12-25 13:40:29 UTC. --
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.788569575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511629788550372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=966c0cb6-a8df-49d5-8047-ab1fe8b4a29b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.789287639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c9b972c-f331-4eaa-b7ba-c4d5288a9714 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.789383822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c9b972c-f331-4eaa-b7ba-c4d5288a9714 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.789636161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c9b972c-f331-4eaa-b7ba-c4d5288a9714 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.842743688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ff5b2a17-5b91-4cab-bd67-3387809bc0b3 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.842983031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ff5b2a17-5b91-4cab-bd67-3387809bc0b3 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.845277310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c9bdb532-fd1e-411f-b1c5-c5fc71015f78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.845843052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511629845814462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c9bdb532-fd1e-411f-b1c5-c5fc71015f78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.847099965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85889cac-0d16-4dfc-8f03-fb60030b3285 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.847194407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85889cac-0d16-4dfc-8f03-fb60030b3285 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.847554875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85889cac-0d16-4dfc-8f03-fb60030b3285 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.896928579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fc9a1fb9-d452-41a9-bcf3-b17ad31bdcfa name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.897020225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fc9a1fb9-d452-41a9-bcf3-b17ad31bdcfa name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.899951850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4a9a840d-4aa5-4332-9193-8cf08fd004c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.900389892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511629900375689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4a9a840d-4aa5-4332-9193-8cf08fd004c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.901849302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b6c831d1-ed3d-418b-af62-130622e563cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.901976340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b6c831d1-ed3d-418b-af62-130622e563cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.902188679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b6c831d1-ed3d-418b-af62-130622e563cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.949686761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ad4a7acf-29c6-4550-acb3-74dcbd8026e5 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.949745230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ad4a7acf-29c6-4550-acb3-74dcbd8026e5 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.951243710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cddd9695-68ee-4afc-9ce9-e4b95f37475a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.951728444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511629951712199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cddd9695-68ee-4afc-9ce9-e4b95f37475a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.952481178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=062a62dc-85d9-4ceb-bc0e-9aa0ea63a8fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.952530933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=062a62dc-85d9-4ceb-bc0e-9aa0ea63a8fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:40:29 embed-certs-880612 crio[725]: time="2023-12-25 13:40:29.952710888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=062a62dc-85d9-4ceb-bc0e-9aa0ea63a8fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0851cb5599abc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   b6c7a9f93ec8e       storage-provisioner
	55ffef136c76b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6eef49ee6443c       busybox
	ea6832c3489cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   9e278279bae50       coredns-5dd5756b68-sbn7n
	5a29e019e5e0d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   35b5ee6655e59       kube-proxy-677d7
	03bfbdc74bd6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   b6c7a9f93ec8e       storage-provisioner
	868a5855738ae       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   dc1ed619fa80d       kube-scheduler-embed-certs-880612
	e34911f64a889       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   1e294a76c33e3       kube-controller-manager-embed-certs-880612
	9990b54a38a74       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   6332b1316abf7       etcd-embed-certs-880612
	5ec3a53c74277       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   b7aa8697e2cc4       kube-apiserver-embed-certs-880612
	
	
	==> coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40511 - 26727 "HINFO IN 4869349565427911480.5933393956858728803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009754956s
	
	
	==> describe nodes <==
	Name:               embed-certs-880612
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=embed-certs-880612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_21_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880612
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:37:43 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:37:43 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:37:43 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:37:43 +0000   Mon, 25 Dec 2023 13:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.179
	  Hostname:    embed-certs-880612
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 53a35066886d40559dab82026d1a57cf
	  System UUID:                53a35066-886d-4055-9dab-82026d1a57cf
	  Boot ID:                    9dd57709-c8a9-4fd4-af70-63cbbb7017c5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-5dd5756b68-sbn7n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-embed-certs-880612                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-embed-certs-880612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-880612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-677d7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-880612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-chnh2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-880612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-880612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-880612 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node embed-certs-880612 status is now: NodeReady
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-880612 event: Registered Node embed-certs-880612 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-880612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-880612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-880612 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-880612 event: Registered Node embed-certs-880612 in Controller
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071699] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519574] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.540576] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156258] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.523338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.637873] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.110222] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.167392] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.128056] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.255042] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.537944] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec25 13:27] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.124799] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] <==
	{"level":"info","ts":"2023-12-25T13:27:03.953409Z","caller":"traceutil/trace.go:171","msg":"trace[1590289357] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"930.306747ms","start":"2023-12-25T13:27:03.023085Z","end":"2023-12-25T13:27:03.953391Z","steps":["trace[1590289357] 'process raft request'  (duration: 912.02435ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:03.956411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.023073Z","time spent":"933.266048ms","remote":"127.0.0.1:51672","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3687,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:399 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3633 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2023-12-25T13:27:03.958661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"936.94262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" ","response":"range_response_count:1 size:3524"}
	{"level":"info","ts":"2023-12-25T13:27:03.958745Z","caller":"traceutil/trace.go:171","msg":"trace[784495139] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:498; }","duration":"937.037304ms","start":"2023-12-25T13:27:03.021696Z","end":"2023-12-25T13:27:03.958733Z","steps":["trace[784495139] 'agreement among raft nodes before linearized reading'  (duration: 935.21287ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:03.958793Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.021687Z","time spent":"937.098081ms","remote":"127.0.0.1:51708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":3547,"request content":"key:\"/registry/clusterroles/edit\" "}
	{"level":"info","ts":"2023-12-25T13:27:03.938188Z","caller":"traceutil/trace.go:171","msg":"trace[1784160754] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:526; }","duration":"916.457297ms","start":"2023-12-25T13:27:03.021712Z","end":"2023-12-25T13:27:03.93817Z","steps":["trace[1784160754] 'read index received'  (duration: 548.560182ms)","trace[1784160754] 'applied index is now lower than readState.Index'  (duration: 367.895973ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:27:03.959225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"856.848248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:27:03.959669Z","caller":"traceutil/trace.go:171","msg":"trace[557757183] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:498; }","duration":"857.291426ms","start":"2023-12-25T13:27:03.102365Z","end":"2023-12-25T13:27:03.959656Z","steps":["trace[557757183] 'agreement among raft nodes before linearized reading'  (duration: 856.820987ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:03.959737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.102349Z","time spent":"857.376956ms","remote":"127.0.0.1:51624","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-25T13:27:03.959957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.759641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2023-12-25T13:27:03.961611Z","caller":"traceutil/trace.go:171","msg":"trace[1621381489] range","detail":"{range_begin:/registry/events/default/embed-certs-880612.17a4160f2a653693; range_end:; response_count:1; response_revision:498; }","duration":"256.412991ms","start":"2023-12-25T13:27:03.705187Z","end":"2023-12-25T13:27:03.9616Z","steps":["trace[1621381489] 'agreement among raft nodes before linearized reading'  (duration: 254.672127ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.336786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.225667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4899789543394873811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" mod_revision:494 > success:<request_put:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" value_size:628 lease:4899789543394873728 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-25T13:27:04.337218Z","caller":"traceutil/trace.go:171","msg":"trace[318654554] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:528; }","duration":"363.258535ms","start":"2023-12-25T13:27:03.973946Z","end":"2023-12-25T13:27:04.337205Z","steps":["trace[318654554] 'read index received'  (duration: 186.499377ms)","trace[318654554] 'applied index is now lower than readState.Index'  (duration: 176.756873ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:04.337346Z","caller":"traceutil/trace.go:171","msg":"trace[1059966004] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"364.57997ms","start":"2023-12-25T13:27:03.972756Z","end":"2023-12-25T13:27:04.337336Z","steps":["trace[1059966004] 'process raft request'  (duration: 364.37631ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.337456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.787696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:27:04.337575Z","caller":"traceutil/trace.go:171","msg":"trace[472828051] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:500; }","duration":"234.91054ms","start":"2023-12-25T13:27:04.102654Z","end":"2023-12-25T13:27:04.337565Z","steps":["trace[472828051] 'agreement among raft nodes before linearized reading'  (duration: 234.753193ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.33745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.972742Z","time spent":"364.657592ms","remote":"127.0.0.1:51672","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2327,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:427 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2289 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >"}
	{"level":"warn","ts":"2023-12-25T13:27:04.337771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.92335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2023-12-25T13:27:04.337846Z","caller":"traceutil/trace.go:171","msg":"trace[544063427] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:500; }","duration":"363.9963ms","start":"2023-12-25T13:27:03.973836Z","end":"2023-12-25T13:27:04.337833Z","steps":["trace[544063427] 'agreement among raft nodes before linearized reading'  (duration: 363.885611ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.337947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.973822Z","time spent":"364.117179ms","remote":"127.0.0.1:51708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":863,"request content":"key:\"/registry/clusterroles/system:aggregate-to-admin\" "}
	{"level":"info","ts":"2023-12-25T13:27:04.337347Z","caller":"traceutil/trace.go:171","msg":"trace[61535417] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"369.108788ms","start":"2023-12-25T13:27:03.968227Z","end":"2023-12-25T13:27:04.337336Z","steps":["trace[61535417] 'process raft request'  (duration: 192.268032ms)","trace[61535417] 'compare'  (duration: 175.801035ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:27:04.338305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.968208Z","time spent":"370.056333ms","remote":"127.0.0.1:51648","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" mod_revision:494 > success:<request_put:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" value_size:628 lease:4899789543394873728 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" > >"}
	{"level":"info","ts":"2023-12-25T13:36:58.814408Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":850}
	{"level":"info","ts":"2023-12-25T13:36:58.817783Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":850,"took":"2.475828ms","hash":2274481468}
	{"level":"info","ts":"2023-12-25T13:36:58.817952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2274481468,"revision":850,"compact-revision":-1}
	
	
	==> kernel <==
	 13:40:30 up 14 min,  0 users,  load average: 0.14, 0.12, 0.09
	Linux embed-certs-880612 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] <==
	I1225 13:37:00.903410       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:37:01.903740       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:37:01.903956       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:37:01.903967       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:37:01.904064       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:37:01.904115       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:37:01.905346       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:38:00.673074       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:38:01.905125       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:38:01.905436       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:38:01.905489       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:38:01.905647       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:38:01.905758       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:38:01.906717       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:39:00.672150       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1225 13:40:00.673260       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:40:01.906418       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:40:01.906797       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:40:01.906808       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:40:01.906952       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:40:01.906979       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:40:01.908590       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] <==
	I1225 13:34:46.923371       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:35:16.446340       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:35:16.932710       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:35:46.452415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:35:46.942851       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:36:16.459054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:36:16.955396       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:36:46.465547       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:36:46.965265       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:37:16.472426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:37:16.973210       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:37:46.479726       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:37:46.982239       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:38:16.486344       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:38:16.991154       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:38:19.501316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="2.659644ms"
	I1225 13:38:33.493077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="100.95µs"
	E1225 13:38:46.494415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:38:47.000442       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:39:16.501117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:39:17.013738       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:39:46.515969       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:39:47.022629       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:40:16.524686       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:40:17.032685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] <==
	I1225 13:27:04.777706       1 server_others.go:69] "Using iptables proxy"
	I1225 13:27:04.802056       1 node.go:141] Successfully retrieved node IP: 192.168.50.179
	I1225 13:27:04.924525       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 13:27:04.924724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:27:04.929669       1 server_others.go:152] "Using iptables Proxier"
	I1225 13:27:04.929800       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:27:04.930667       1 server.go:846] "Version info" version="v1.28.4"
	I1225 13:27:04.931025       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:27:04.933503       1 config.go:188] "Starting service config controller"
	I1225 13:27:04.942159       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:27:04.934620       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:27:04.942303       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:27:04.938848       1 config.go:315] "Starting node config controller"
	I1225 13:27:04.942316       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:27:05.042958       1 shared_informer.go:318] Caches are synced for node config
	I1225 13:27:05.042993       1 shared_informer.go:318] Caches are synced for service config
	I1225 13:27:05.043006       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] <==
	I1225 13:26:57.534780       1 serving.go:348] Generated self-signed cert in-memory
	W1225 13:27:00.838983       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:27:00.839223       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:27:00.839239       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:27:00.839338       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:27:00.905625       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1225 13:27:00.905720       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:27:00.907977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 13:27:00.908182       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 13:27:00.908550       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1225 13:27:00.908644       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 13:27:01.008609       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:25 UTC, ends at Mon 2023-12-25 13:40:30 UTC. --
	Dec 25 13:37:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:37:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:38:06 embed-certs-880612 kubelet[930]: E1225 13:38:06.488733     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:38:06 embed-certs-880612 kubelet[930]: E1225 13:38:06.488831     930 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:38:06 embed-certs-880612 kubelet[930]: E1225 13:38:06.489142     930 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6s6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-chnh2_kube-system(5a0bb4ec-4652-4e5a-9da4-3ce126a4be11): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:38:06 embed-certs-880612 kubelet[930]: E1225 13:38:06.489178     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:38:19 embed-certs-880612 kubelet[930]: E1225 13:38:19.481306     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:38:33 embed-certs-880612 kubelet[930]: E1225 13:38:33.473762     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:38:48 embed-certs-880612 kubelet[930]: E1225 13:38:48.472932     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:38:53 embed-certs-880612 kubelet[930]: E1225 13:38:53.488964     930 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:38:53 embed-certs-880612 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:38:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:38:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:38:59 embed-certs-880612 kubelet[930]: E1225 13:38:59.473396     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:39:12 embed-certs-880612 kubelet[930]: E1225 13:39:12.472997     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:39:26 embed-certs-880612 kubelet[930]: E1225 13:39:26.473476     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:39:37 embed-certs-880612 kubelet[930]: E1225 13:39:37.480140     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:39:48 embed-certs-880612 kubelet[930]: E1225 13:39:48.473261     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:39:53 embed-certs-880612 kubelet[930]: E1225 13:39:53.496522     930 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:39:53 embed-certs-880612 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:39:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:39:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:39:59 embed-certs-880612 kubelet[930]: E1225 13:39:59.473812     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:40:10 embed-certs-880612 kubelet[930]: E1225 13:40:10.472936     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:40:23 embed-certs-880612 kubelet[930]: E1225 13:40:23.474091     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	
	
	==> storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] <==
	I1225 13:27:04.472343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 13:27:34.485394       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] <==
	I1225 13:27:34.889572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:34.906969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:34.907226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:27:52.325533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:27:52.328200       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06!
	I1225 13:27:52.329607       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96e34e46-8347-4b63-a898-05e7a93d868f", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06 became leader
	I1225 13:27:52.428480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-880612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-chnh2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2: exit status 1 (83.691912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-chnh2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.48s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:45:56.292344579 +0000 UTC m=+5380.910821608
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
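The failure above means the harness's 9m0s poll for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace never saw a ready pod. A rough manual equivalent of that check (a sketch only; the context name and label selector are taken from the log lines above, and the test itself polls through its own Go helpers rather than kubectl wait) would be:

	kubectl --context default-k8s-diff-port-344803 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s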
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-344803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-344803 logs -n 25: (2.197227081s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-435411                           | kubernetes-upgrade-435411    | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:17 UTC |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-198979        | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
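	These entries were collected by the post-mortem's "minikube logs -n 25" call shown above; if a longer window is wanted, the same command accepts a larger line count, for example:

	    out/minikube-linux-amd64 -p default-k8s-diff-port-344803 logs -n 100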
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:25:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:25:09.868120 1484104 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:25:09.868323 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868335 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:25:09.868341 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868532 1484104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:25:09.869122 1484104 out.go:303] Setting JSON to false
	I1225 13:25:09.870130 1484104 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158863,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:25:09.870205 1484104 start.go:138] virtualization: kvm guest
	I1225 13:25:09.872541 1484104 out.go:177] * [default-k8s-diff-port-344803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:25:09.874217 1484104 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:25:09.874305 1484104 notify.go:220] Checking for updates...
	I1225 13:25:09.875839 1484104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:25:09.877587 1484104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:25:09.879065 1484104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:25:09.880503 1484104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:25:09.881819 1484104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:25:09.883607 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:25:09.884026 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.884110 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.899270 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1225 13:25:09.899708 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.900286 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.900337 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.900687 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.900912 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.901190 1484104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:25:09.901525 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.901579 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.916694 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1225 13:25:09.917130 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.917673 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.917704 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.918085 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.918333 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.953536 1484104 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:25:09.955050 1484104 start.go:298] selected driver: kvm2
	I1225 13:25:09.955065 1484104 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.955241 1484104 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:25:09.955956 1484104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.956047 1484104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:25:09.971769 1484104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:25:09.972199 1484104 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:25:09.972296 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:25:09.972313 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:25:09.972334 1484104 start_flags.go:323] config:
	{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.972534 1484104 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.975411 1484104 out.go:177] * Starting control plane node default-k8s-diff-port-344803 in cluster default-k8s-diff-port-344803
	I1225 13:25:07.694690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:09.976744 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:25:09.976814 1484104 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:25:09.976830 1484104 cache.go:56] Caching tarball of preloaded images
	I1225 13:25:09.976928 1484104 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:25:09.976941 1484104 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:25:09.977353 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:25:09.977710 1484104 start.go:365] acquiring machines lock for default-k8s-diff-port-344803: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:10.766734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:16.850681 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:19.922690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:25.998796 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:29.070780 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:35.150661 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:38.222822 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:44.302734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.379073 1483118 start.go:369] acquired machines lock for "no-preload-330063" in 3m45.211894916s
	I1225 13:25:50.379143 1483118 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:25:50.379155 1483118 fix.go:54] fixHost starting: 
	I1225 13:25:50.379692 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:50.379739 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:50.395491 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1225 13:25:50.395953 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:50.396490 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:25:50.396512 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:50.396880 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:50.397080 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:25:50.397224 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:25:50.399083 1483118 fix.go:102] recreateIfNeeded on no-preload-330063: state=Stopped err=<nil>
	I1225 13:25:50.399110 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	W1225 13:25:50.399283 1483118 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:25:50.401483 1483118 out.go:177] * Restarting existing kvm2 VM for "no-preload-330063" ...
	I1225 13:25:47.374782 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.376505 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:25:50.376562 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:25:50.378895 1482618 machine.go:91] provisioned docker machine in 4m37.578359235s
	I1225 13:25:50.378958 1482618 fix.go:56] fixHost completed within 4m37.60680956s
	I1225 13:25:50.378968 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 4m37.606859437s
	W1225 13:25:50.378992 1482618 start.go:694] error starting host: provision: host is not running
	W1225 13:25:50.379100 1482618 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 13:25:50.379111 1482618 start.go:709] Will try again in 5 seconds ...
	I1225 13:25:50.403280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Start
	I1225 13:25:50.403507 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring networks are active...
	I1225 13:25:50.404422 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network default is active
	I1225 13:25:50.404784 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network mk-no-preload-330063 is active
	I1225 13:25:50.405087 1483118 main.go:141] libmachine: (no-preload-330063) Getting domain xml...
	I1225 13:25:50.405654 1483118 main.go:141] libmachine: (no-preload-330063) Creating domain...
	I1225 13:25:51.676192 1483118 main.go:141] libmachine: (no-preload-330063) Waiting to get IP...
	I1225 13:25:51.677110 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.677638 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.677715 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.677616 1484268 retry.go:31] will retry after 268.018359ms: waiting for machine to come up
	I1225 13:25:51.947683 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.948172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.948198 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.948118 1484268 retry.go:31] will retry after 278.681465ms: waiting for machine to come up
	I1225 13:25:52.228745 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.229234 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.229265 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.229166 1484268 retry.go:31] will retry after 329.72609ms: waiting for machine to come up
	I1225 13:25:52.560878 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.561315 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.561348 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.561257 1484268 retry.go:31] will retry after 398.659264ms: waiting for machine to come up
	I1225 13:25:52.962067 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.962596 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.962620 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.962548 1484268 retry.go:31] will retry after 474.736894ms: waiting for machine to come up
	I1225 13:25:53.439369 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:53.439834 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:53.439856 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:53.439795 1484268 retry.go:31] will retry after 632.915199ms: waiting for machine to come up
	I1225 13:25:54.074832 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.075320 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.075349 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.075286 1484268 retry.go:31] will retry after 889.605242ms: waiting for machine to come up
	I1225 13:25:54.966323 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.966800 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.966822 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.966757 1484268 retry.go:31] will retry after 1.322678644s: waiting for machine to come up
	I1225 13:25:55.379741 1482618 start.go:365] acquiring machines lock for old-k8s-version-198979: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:56.291182 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:56.291604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:56.291633 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:56.291567 1484268 retry.go:31] will retry after 1.717647471s: waiting for machine to come up
	I1225 13:25:58.011626 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:58.012081 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:58.012116 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:58.012018 1484268 retry.go:31] will retry after 2.29935858s: waiting for machine to come up
	I1225 13:26:00.314446 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:00.314833 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:00.314858 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:00.314806 1484268 retry.go:31] will retry after 2.50206405s: waiting for machine to come up
	I1225 13:26:02.819965 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:02.820458 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:02.820490 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:02.820403 1484268 retry.go:31] will retry after 2.332185519s: waiting for machine to come up
	I1225 13:26:05.155725 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:05.156228 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:05.156263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:05.156153 1484268 retry.go:31] will retry after 2.769754662s: waiting for machine to come up
	I1225 13:26:07.929629 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:07.930087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:07.930126 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:07.930040 1484268 retry.go:31] will retry after 4.407133766s: waiting for machine to come up
	I1225 13:26:13.687348 1483946 start.go:369] acquired machines lock for "embed-certs-880612" in 1m27.002513209s
	I1225 13:26:13.687426 1483946 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:13.687436 1483946 fix.go:54] fixHost starting: 
	I1225 13:26:13.687850 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:13.687916 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:13.706054 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1225 13:26:13.706521 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:13.707063 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:26:13.707087 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:13.707472 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:13.707645 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:13.707832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:26:13.709643 1483946 fix.go:102] recreateIfNeeded on embed-certs-880612: state=Stopped err=<nil>
	I1225 13:26:13.709676 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	W1225 13:26:13.709868 1483946 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:13.712452 1483946 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880612" ...
	I1225 13:26:12.339674 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340219 1483118 main.go:141] libmachine: (no-preload-330063) Found IP for machine: 192.168.72.232
	I1225 13:26:12.340243 1483118 main.go:141] libmachine: (no-preload-330063) Reserving static IP address...
	I1225 13:26:12.340263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has current primary IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340846 1483118 main.go:141] libmachine: (no-preload-330063) Reserved static IP address: 192.168.72.232
	I1225 13:26:12.340896 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.340912 1483118 main.go:141] libmachine: (no-preload-330063) Waiting for SSH to be available...
	I1225 13:26:12.340947 1483118 main.go:141] libmachine: (no-preload-330063) DBG | skip adding static IP to network mk-no-preload-330063 - found existing host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"}
	I1225 13:26:12.340962 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Getting to WaitForSSH function...
	I1225 13:26:12.343164 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343417 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.343448 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343552 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH client type: external
	I1225 13:26:12.343566 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa (-rw-------)
	I1225 13:26:12.343587 1483118 main.go:141] libmachine: (no-preload-330063) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:12.343595 1483118 main.go:141] libmachine: (no-preload-330063) DBG | About to run SSH command:
	I1225 13:26:12.343603 1483118 main.go:141] libmachine: (no-preload-330063) DBG | exit 0
	I1225 13:26:12.434661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:12.435101 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetConfigRaw
	I1225 13:26:12.435827 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.438300 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438673 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.438705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438870 1483118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:26:12.439074 1483118 machine.go:88] provisioning docker machine ...
	I1225 13:26:12.439093 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:12.439335 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439556 1483118 buildroot.go:166] provisioning hostname "no-preload-330063"
	I1225 13:26:12.439584 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439789 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.442273 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442630 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.442661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442768 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.442956 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443127 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443271 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.443410 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.443772 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.443787 1483118 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-330063 && echo "no-preload-330063" | sudo tee /etc/hostname
	I1225 13:26:12.581579 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-330063
	
	I1225 13:26:12.581609 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.584621 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.584949 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.584979 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.585252 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.585495 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585656 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585790 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.585947 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.586320 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.586346 1483118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-330063' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-330063/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-330063' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:12.717139 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:12.717176 1483118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:12.717197 1483118 buildroot.go:174] setting up certificates
	I1225 13:26:12.717212 1483118 provision.go:83] configureAuth start
	I1225 13:26:12.717229 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.717570 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.720469 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.720828 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.720859 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.721016 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.723432 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723758 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.723815 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723944 1483118 provision.go:138] copyHostCerts
	I1225 13:26:12.724021 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:12.724035 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:12.724102 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:12.724207 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:12.724215 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:12.724242 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:12.724323 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:12.724330 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:12.724351 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:12.724408 1483118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.no-preload-330063 san=[192.168.72.232 192.168.72.232 localhost 127.0.0.1 minikube no-preload-330063]
	I1225 13:26:12.929593 1483118 provision.go:172] copyRemoteCerts
	I1225 13:26:12.929665 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:12.929699 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.932608 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.932934 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.932978 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.933144 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.933389 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.933581 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.933738 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.023574 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:13.047157 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:26:13.070779 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:13.094249 1483118 provision.go:86] duration metric: configureAuth took 377.018818ms
	I1225 13:26:13.094284 1483118 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:13.094538 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:13.094665 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.097705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098133 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.098179 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098429 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.098708 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.098888 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.099029 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.099191 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.099516 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.099534 1483118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:13.430084 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:13.430138 1483118 machine.go:91] provisioned docker machine in 991.050011ms
	I1225 13:26:13.430150 1483118 start.go:300] post-start starting for "no-preload-330063" (driver="kvm2")
	I1225 13:26:13.430162 1483118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:13.430185 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.430616 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:13.430661 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.433623 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434018 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.434054 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434191 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.434413 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.434586 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.434700 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.523954 1483118 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:13.528009 1483118 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:13.528040 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:13.528118 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:13.528214 1483118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:13.528359 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:13.536826 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:13.561011 1483118 start.go:303] post-start completed in 130.840608ms
	I1225 13:26:13.561046 1483118 fix.go:56] fixHost completed within 23.181891118s
	I1225 13:26:13.561078 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.563717 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564040 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.564087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564268 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.564504 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564702 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564812 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.564965 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.565326 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.565340 1483118 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:26:13.687155 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510773.671808211
	
	I1225 13:26:13.687181 1483118 fix.go:206] guest clock: 1703510773.671808211
	I1225 13:26:13.687189 1483118 fix.go:219] Guest: 2023-12-25 13:26:13.671808211 +0000 UTC Remote: 2023-12-25 13:26:13.561052282 +0000 UTC m=+248.574935292 (delta=110.755929ms)
	I1225 13:26:13.687209 1483118 fix.go:190] guest clock delta is within tolerance: 110.755929ms
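The lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the ~110ms skew as within tolerance. A minimal Go sketch of that kind of tolerance check (a hypothetical helper, not minikube's fix.go code; the timestamps are the ones from the log above):

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock (read remotely,
// e.g. via `date +%s.%N`) is close enough to the host clock.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log above: guest vs. host timestamps.
	guest := time.Date(2023, 12, 25, 13, 26, 13, 671808211, time.UTC)
	host := time.Date(2023, 12, 25, 13, 26, 13, 561052282, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```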
	I1225 13:26:13.687214 1483118 start.go:83] releasing machines lock for "no-preload-330063", held for 23.308100249s
	I1225 13:26:13.687243 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.687561 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:13.690172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690572 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.690604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690780 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691362 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691534 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691615 1483118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:13.691670 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.691807 1483118 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:13.691842 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.694593 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694871 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694943 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.694967 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695202 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695293 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.695319 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695452 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695508 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695613 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.695725 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695813 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.695899 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.696068 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.812135 1483118 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:13.817944 1483118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:13.965641 1483118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:13.973263 1483118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:13.973433 1483118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:13.991077 1483118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:13.991112 1483118 start.go:475] detecting cgroup driver to use...
	I1225 13:26:13.991197 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:14.005649 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:14.018464 1483118 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:14.018540 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:14.031361 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:14.046011 1483118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:14.152826 1483118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:14.281488 1483118 docker.go:219] disabling docker service ...
	I1225 13:26:14.281577 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:14.297584 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:14.311896 1483118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:14.448141 1483118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:14.583111 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:14.599419 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:14.619831 1483118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:14.619909 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.631979 1483118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:14.632065 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.643119 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.655441 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.666525 1483118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:14.678080 1483118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:14.687889 1483118 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:14.687957 1483118 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:14.702290 1483118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
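The fallback above is: if the bridge-nf-call-iptables sysctl node is missing, load the br_netfilter module (which creates it), then enable IPv4 forwarding. A rough Go sketch of that sequence, assuming it runs with root privileges (illustrative only, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf-call-iptables
// sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
// It must run as root (the log does this via sudo over SSH).
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// sysctl not present yet; loading the module creates it.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```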
	I1225 13:26:14.712225 1483118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:14.836207 1483118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:15.019332 1483118 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:15.019424 1483118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:15.024755 1483118 start.go:543] Will wait 60s for crictl version
	I1225 13:26:15.024844 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.028652 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:15.074415 1483118 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:15.074550 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.128559 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.178477 1483118 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1225 13:26:13.714488 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Start
	I1225 13:26:13.714708 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring networks are active...
	I1225 13:26:13.715513 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network default is active
	I1225 13:26:13.715868 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network mk-embed-certs-880612 is active
	I1225 13:26:13.716279 1483946 main.go:141] libmachine: (embed-certs-880612) Getting domain xml...
	I1225 13:26:13.716905 1483946 main.go:141] libmachine: (embed-certs-880612) Creating domain...
	I1225 13:26:15.049817 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting to get IP...
	I1225 13:26:15.051040 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.051641 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.051756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.051615 1484395 retry.go:31] will retry after 199.911042ms: waiting for machine to come up
	I1225 13:26:15.253158 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.260582 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.260620 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.260519 1484395 retry.go:31] will retry after 285.022636ms: waiting for machine to come up
	I1225 13:26:15.547290 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.547756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.547787 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.547692 1484395 retry.go:31] will retry after 327.637369ms: waiting for machine to come up
	I1225 13:26:15.877618 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.878119 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.878153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.878058 1484395 retry.go:31] will retry after 384.668489ms: waiting for machine to come up
	I1225 13:26:16.264592 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.265056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.265084 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.265005 1484395 retry.go:31] will retry after 468.984683ms: waiting for machine to come up
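The repeated "will retry after …: waiting for machine to come up" lines come from a retry loop with growing, jittered delays while the restarted VM acquires a DHCP lease. A simplified Go sketch of that pattern (the getIP helper is hypothetical and merely stands in for querying the libvirt network's leases):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// getIP stands in for looking up the domain's DHCP lease in the libvirt network.
func getIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.50.179", nil
}

// waitForIP retries with a growing, jittered delay, roughly like the log above.
func waitForIP() (string, error) {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := getIP(attempt); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine never reported an IP")
}

func main() {
	ip, err := waitForIP()
	fmt.Println(ip, err)
}
```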
	I1225 13:26:15.180205 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:15.183372 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.183820 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:15.183862 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.184054 1483118 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:15.188935 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
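The one-liner above makes the host.minikube.internal entry idempotent: it filters out any existing line for that hostname and appends the current IP. The same idea expressed in Go (an illustrative sketch; minikube uses the shell pipeline shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for the given hostname and
// appends a fresh "ip\thostname" entry, mirroring the shell one-liner above.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHostsEntry(strings.TrimRight(string(data), "\n"), "192.168.72.1", "host.minikube.internal"))
}
```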
	I1225 13:26:15.202790 1483118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:26:15.202839 1483118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:15.245267 1483118 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:26:15.245297 1483118 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:26:15.245409 1483118 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.245430 1483118 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.245448 1483118 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.245467 1483118 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.245468 1483118 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1225 13:26:15.245534 1483118 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.245447 1483118 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.245404 1483118 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.247839 1483118 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.247850 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.247874 1483118 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.247911 1483118 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.247980 1483118 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1225 13:26:15.247984 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.248043 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.248281 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.404332 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.405729 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.407712 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1225 13:26:15.412419 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.413201 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.413349 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.453117 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.533541 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.536843 1483118 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1225 13:26:15.536896 1483118 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.536950 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.576965 1483118 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1225 13:26:15.577010 1483118 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.577078 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688643 1483118 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1225 13:26:15.688696 1483118 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.688710 1483118 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1225 13:26:15.688750 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688759 1483118 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.688765 1483118 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1225 13:26:15.688794 1483118 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.688813 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688835 1483118 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1225 13:26:15.688847 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688858 1483118 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.688869 1483118 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1225 13:26:15.688890 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688896 1483118 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.688910 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.688921 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688949 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.706288 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.779043 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1225 13:26:15.779170 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1225 13:26:15.779181 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.779297 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:15.779309 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.779274 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.779439 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.779507 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.864891 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.865017 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.884972 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885024 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885035 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1225 13:26:15.885045 1483118 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885091 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885094 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885109 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885146 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1225 13:26:15.885167 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1225 13:26:15.885229 1483118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:15.885239 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1225 13:26:15.885273 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1225 13:26:15.898753 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1225 13:26:17.966777 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.08165399s)
	I1225 13:26:17.966822 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1225 13:26:17.966836 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.081714527s)
	I1225 13:26:17.966865 1483118 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.081735795s)
	I1225 13:26:17.966848 1483118 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:17.966894 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966874 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966936 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:16.736013 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.736519 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.736553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.736449 1484395 retry.go:31] will retry after 873.004128ms: waiting for machine to come up
	I1225 13:26:17.611675 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:17.612135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:17.612160 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:17.612085 1484395 retry.go:31] will retry after 1.093577821s: waiting for machine to come up
	I1225 13:26:18.707411 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:18.707936 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:18.707994 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:18.707904 1484395 retry.go:31] will retry after 1.364130049s: waiting for machine to come up
	I1225 13:26:20.074559 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:20.075102 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:20.075135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:20.075033 1484395 retry.go:31] will retry after 1.740290763s: waiting for machine to come up
	I1225 13:26:21.677915 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.710943608s)
	I1225 13:26:21.677958 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1225 13:26:21.677990 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:21.678050 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:23.630977 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.952875837s)
	I1225 13:26:23.631018 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1225 13:26:23.631051 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:23.631112 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:21.818166 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:21.818695 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:21.818728 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:21.818641 1484395 retry.go:31] will retry after 2.035498479s: waiting for machine to come up
	I1225 13:26:23.856368 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:23.857094 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:23.857120 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:23.856997 1484395 retry.go:31] will retry after 2.331127519s: waiting for machine to come up
	I1225 13:26:26.191862 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:26.192571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:26.192608 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:26.192513 1484395 retry.go:31] will retry after 3.191632717s: waiting for machine to come up
	I1225 13:26:26.193816 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.56267278s)
	I1225 13:26:26.193849 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1225 13:26:26.193884 1483118 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:26.193951 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:27.342879 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.148892619s)
	I1225 13:26:27.342913 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 13:26:27.342948 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:27.343014 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:29.909035 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.565991605s)
	I1225 13:26:29.909080 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1225 13:26:29.909105 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.909159 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.386007 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:29.386335 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:29.386366 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:29.386294 1484395 retry.go:31] will retry after 3.786228584s: waiting for machine to come up
	I1225 13:26:34.439583 1484104 start.go:369] acquired machines lock for "default-k8s-diff-port-344803" in 1m24.461830001s
	I1225 13:26:34.439666 1484104 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:34.439686 1484104 fix.go:54] fixHost starting: 
	I1225 13:26:34.440164 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:34.440230 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:34.457403 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I1225 13:26:34.457867 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:34.458351 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:26:34.458422 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:34.458748 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:34.458989 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:34.459176 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:26:34.460975 1484104 fix.go:102] recreateIfNeeded on default-k8s-diff-port-344803: state=Stopped err=<nil>
	I1225 13:26:34.461008 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	W1225 13:26:34.461188 1484104 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:34.463715 1484104 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-344803" ...
	I1225 13:26:34.465022 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Start
	I1225 13:26:34.465274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring networks are active...
	I1225 13:26:34.466182 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network default is active
	I1225 13:26:34.466565 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network mk-default-k8s-diff-port-344803 is active
	I1225 13:26:34.466922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Getting domain xml...
	I1225 13:26:34.467691 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Creating domain...
	I1225 13:26:32.065345 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.15614946s)
	I1225 13:26:32.065380 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1225 13:26:32.065414 1483118 cache_images.go:123] Successfully loaded all cached images
	I1225 13:26:32.065421 1483118 cache_images.go:92] LoadImages completed in 16.820112197s
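The image-loading sequence above follows one pattern per image: ask the runtime whether the image is already present at the expected digest (`podman image inspect`), remove any stale tag with crictl, copy the cached tarball if it is not already on the VM, and load it with `podman load -i`. A condensed Go sketch of that control flow, simplified to a plain existence check (hypothetical helpers, not minikube's cache_images code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageExists asks the runtime's shared storage (via podman) whether the image is present.
func imageExists(image string) bool {
	return exec.Command("sudo", "podman", "image", "exists", image).Run() == nil
}

// loadCached removes any stale tag and loads the cached tarball for image.
func loadCached(image, tarball string) error {
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "image not found"
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	images := map[string]string{
		"registry.k8s.io/coredns/coredns:v1.11.1": "/var/lib/minikube/images/coredns_v1.11.1",
		"registry.k8s.io/etcd:3.5.10-0":           "/var/lib/minikube/images/etcd_3.5.10-0",
	}
	for image, tarball := range images {
		if imageExists(image) {
			continue
		}
		if err := loadCached(image, tarball); err != nil {
			fmt.Println("load failed:", image, err)
		}
	}
}
```

On the minikube VM, podman and CRI-O share the same containers/storage backend, which is presumably why loading through podman makes each image visible to the runtime as soon as the load completes.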
	I1225 13:26:32.065498 1483118 ssh_runner.go:195] Run: crio config
	I1225 13:26:32.120989 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:32.121019 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:32.121045 1483118 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:32.121063 1483118 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-330063 NodeName:no-preload-330063 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:32.121216 1483118 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-330063"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:32.121297 1483118 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-330063 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:32.121357 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:26:32.132569 1483118 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:32.132677 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:32.142052 1483118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1225 13:26:32.158590 1483118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:26:32.174761 1483118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1225 13:26:32.191518 1483118 ssh_runner.go:195] Run: grep 192.168.72.232	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:32.195353 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:32.206845 1483118 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063 for IP: 192.168.72.232
	I1225 13:26:32.206879 1483118 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:32.207098 1483118 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:32.207145 1483118 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:32.207212 1483118 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.key
	I1225 13:26:32.207270 1483118 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key.4e9d87c6
	I1225 13:26:32.207323 1483118 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key
	I1225 13:26:32.207437 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:32.207465 1483118 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:32.207475 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:32.207513 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:32.207539 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:32.207565 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:32.207607 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:32.208427 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:32.231142 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:32.253335 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:32.275165 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:32.297762 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:32.320671 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:32.344125 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:32.368066 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:32.390688 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:32.412849 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:32.435445 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:32.457687 1483118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:32.474494 1483118 ssh_runner.go:195] Run: openssl version
	I1225 13:26:32.480146 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:32.491141 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495831 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495902 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.501393 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:32.511643 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:32.521843 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526421 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526514 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.531988 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:32.542920 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:32.553604 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558381 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558478 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.563913 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:32.574591 1483118 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:32.579046 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:32.584821 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:32.590781 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:32.596456 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:32.601978 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:32.607981 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
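Each `openssl x509 -checkend 86400` invocation above asks whether a certificate expires within the next 24 hours; a non-zero exit marks the cert as expiring soon. The equivalent check in Go with crypto/x509 (an illustrative sketch, not minikube's certs.go):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window (the `openssl x509 -checkend` equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```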
	I1225 13:26:32.613785 1483118 kubeadm.go:404] StartCluster: {Name:no-preload-330063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:32.613897 1483118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:32.613955 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:32.651782 1483118 cri.go:89] found id: ""
	I1225 13:26:32.651858 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:32.664617 1483118 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:32.664648 1483118 kubeadm.go:636] restartCluster start
	I1225 13:26:32.664710 1483118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:32.674727 1483118 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:32.676090 1483118 kubeconfig.go:92] found "no-preload-330063" server: "https://192.168.72.232:8443"
	I1225 13:26:32.679085 1483118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:32.689716 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:32.689824 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:32.702305 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.189843 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.189955 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.202514 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.689935 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.690048 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.703975 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.190601 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.190722 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.203987 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.690505 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.690639 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.701704 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.173890 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174349 1483946 main.go:141] libmachine: (embed-certs-880612) Found IP for machine: 192.168.50.179
	I1225 13:26:33.174372 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has current primary IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174405 1483946 main.go:141] libmachine: (embed-certs-880612) Reserving static IP address...
	I1225 13:26:33.174805 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.174845 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | skip adding static IP to network mk-embed-certs-880612 - found existing host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"}
	I1225 13:26:33.174860 1483946 main.go:141] libmachine: (embed-certs-880612) Reserved static IP address: 192.168.50.179
	I1225 13:26:33.174877 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting for SSH to be available...
	I1225 13:26:33.174892 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Getting to WaitForSSH function...
	I1225 13:26:33.177207 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177579 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.177609 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH client type: external
	I1225 13:26:33.177737 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa (-rw-------)
	I1225 13:26:33.177777 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:33.177790 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | About to run SSH command:
	I1225 13:26:33.177803 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | exit 0
	I1225 13:26:33.274328 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:33.274736 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetConfigRaw
	I1225 13:26:33.275462 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.278056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278429 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.278483 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278725 1483946 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:26:33.278982 1483946 machine.go:88] provisioning docker machine ...
	I1225 13:26:33.279013 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:33.279236 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279448 1483946 buildroot.go:166] provisioning hostname "embed-certs-880612"
	I1225 13:26:33.279468 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279619 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.281930 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282277 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.282311 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282474 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.282704 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.282885 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.283033 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.283194 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.283700 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.283723 1483946 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880612 && echo "embed-certs-880612" | sudo tee /etc/hostname
	I1225 13:26:33.433456 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880612
	
	I1225 13:26:33.433483 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.436392 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.436794 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.436840 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.437004 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.437233 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437446 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437595 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.437783 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.438112 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.438134 1483946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880612/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:33.579776 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:33.579813 1483946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:33.579845 1483946 buildroot.go:174] setting up certificates
	I1225 13:26:33.579859 1483946 provision.go:83] configureAuth start
	I1225 13:26:33.579874 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.580151 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.582843 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583233 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.583266 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583461 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.585844 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586216 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.586253 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586454 1483946 provision.go:138] copyHostCerts
	I1225 13:26:33.586532 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:33.586548 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:33.586604 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:33.586692 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:33.586704 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:33.586723 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:33.586771 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:33.586778 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:33.586797 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:33.586837 1483946 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880612 san=[192.168.50.179 192.168.50.179 localhost 127.0.0.1 minikube embed-certs-880612]
	I1225 13:26:33.640840 1483946 provision.go:172] copyRemoteCerts
	I1225 13:26:33.640921 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:33.640951 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.643970 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644390 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.644419 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644606 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.644877 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.645065 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.645204 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:33.744907 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:33.769061 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 13:26:33.792125 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:33.816115 1483946 provision.go:86] duration metric: configureAuth took 236.215977ms
	I1225 13:26:33.816159 1483946 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:33.816373 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:33.816497 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.819654 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820075 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.820108 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820287 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.820519 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820738 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820873 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.821068 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.821403 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.821428 1483946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:34.159844 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:34.159882 1483946 machine.go:91] provisioned docker machine in 880.882549ms
	I1225 13:26:34.159897 1483946 start.go:300] post-start starting for "embed-certs-880612" (driver="kvm2")
	I1225 13:26:34.159934 1483946 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:34.159964 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.160327 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:34.160358 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.163009 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163367 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.163400 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163600 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.163801 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.163943 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.164093 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.261072 1483946 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:34.265655 1483946 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:34.265686 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:34.265777 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:34.265881 1483946 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:34.265996 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:34.276013 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:34.299731 1483946 start.go:303] post-start completed in 139.812994ms
	I1225 13:26:34.299783 1483946 fix.go:56] fixHost completed within 20.612345515s
	I1225 13:26:34.299813 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.302711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303189 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.303229 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303363 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.303617 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.303856 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.304000 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.304198 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:34.304522 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:34.304535 1483946 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:34.439399 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510794.384723199
	
	I1225 13:26:34.439426 1483946 fix.go:206] guest clock: 1703510794.384723199
	I1225 13:26:34.439433 1483946 fix.go:219] Guest: 2023-12-25 13:26:34.384723199 +0000 UTC Remote: 2023-12-25 13:26:34.29978875 +0000 UTC m=+107.780041384 (delta=84.934449ms)
	I1225 13:26:34.439468 1483946 fix.go:190] guest clock delta is within tolerance: 84.934449ms
	I1225 13:26:34.439475 1483946 start.go:83] releasing machines lock for "embed-certs-880612", held for 20.75208465s
	I1225 13:26:34.439518 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.439832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:34.442677 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443002 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.443031 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.443827 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444029 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444168 1483946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:34.444225 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.444259 1483946 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:34.444295 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.447106 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447136 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447497 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447533 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447677 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447719 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447860 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447904 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447982 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448094 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448170 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.448219 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.572590 1483946 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:34.578648 1483946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:34.723874 1483946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:34.731423 1483946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:34.731495 1483946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:34.752447 1483946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:34.752478 1483946 start.go:475] detecting cgroup driver to use...
	I1225 13:26:34.752539 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:34.766782 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:34.781457 1483946 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:34.781548 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:34.798097 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:34.813743 1483946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:34.936843 1483946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:35.053397 1483946 docker.go:219] disabling docker service ...
	I1225 13:26:35.053478 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:35.067702 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:35.079670 1483946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:35.213241 1483946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:35.346105 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:35.359207 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:35.377259 1483946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:35.377347 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.388026 1483946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:35.388116 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.398180 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.411736 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.425888 1483946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:35.436586 1483946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:35.446969 1483946 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:35.447028 1483946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:35.461401 1483946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:35.471896 1483946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:35.619404 1483946 ssh_runner.go:195] Run: sudo systemctl restart crio
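	[editor's note] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) and then restarts CRI-O, rather than templating a whole config file. A hedged sketch that composes the same shell commands — the function name is invented, and how the commands are executed (e.g. over an SSH runner) is left to the caller:

```go
package main

import "fmt"

// crioConfCommands returns the shell commands that pin the pause image and
// cgroup manager in CRI-O's drop-in config, mirroring the log above.
func crioConfCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}
```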
	I1225 13:26:35.825331 1483946 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:35.825410 1483946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:35.830699 1483946 start.go:543] Will wait 60s for crictl version
	I1225 13:26:35.830779 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:26:35.834938 1483946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:35.874595 1483946 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:35.874717 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.924227 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.982707 1483946 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:35.984401 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:35.987241 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987669 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:35.987708 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987991 1483946 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:35.992383 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
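	[editor's note] The grep check above is followed by a filter-and-append rewrite of /etc/hosts: any existing line for the alias is dropped, the fresh IP mapping is appended, and the temp file is copied back with sudo. A hypothetical Go helper that builds the same snippet (the function name is illustrative; the result is meant to be passed to `/bin/bash -c` by a command runner):

```go
package main

import "fmt"

// hostsAliasSnippet returns a bash snippet that removes any existing
// /etc/hosts line ending in alias and appends "ip<TAB>alias", writing
// through a temp file before copying it back into place with sudo.
func hostsAliasSnippet(ip, alias string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; printf '%s\\t%s\\n'; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		alias, ip, alias)
}

func main() {
	fmt.Println(hostsAliasSnippet("192.168.50.1", "host.minikube.internal"))
}
```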
	I1225 13:26:36.004918 1483946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:36.005000 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:36.053783 1483946 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:36.053887 1483946 ssh_runner.go:195] Run: which lz4
	I1225 13:26:36.058040 1483946 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:36.062730 1483946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:36.062785 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:35.824151 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting to get IP...
	I1225 13:26:35.825061 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825643 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:35.825605 1484550 retry.go:31] will retry after 292.143168ms: waiting for machine to come up
	I1225 13:26:36.119220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119787 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.119666 1484550 retry.go:31] will retry after 250.340048ms: waiting for machine to come up
	I1225 13:26:36.372343 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372894 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372932 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.372840 1484550 retry.go:31] will retry after 434.335692ms: waiting for machine to come up
	I1225 13:26:36.808477 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809037 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809070 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.808999 1484550 retry.go:31] will retry after 455.184367ms: waiting for machine to come up
	I1225 13:26:37.265791 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266330 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266364 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.266278 1484550 retry.go:31] will retry after 487.994897ms: waiting for machine to come up
	I1225 13:26:37.756220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756745 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756774 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.756699 1484550 retry.go:31] will retry after 817.108831ms: waiting for machine to come up
	I1225 13:26:38.575846 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576271 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576301 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:38.576222 1484550 retry.go:31] will retry after 1.022104679s: waiting for machine to come up
	I1225 13:26:39.600386 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600901 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:39.600796 1484550 retry.go:31] will retry after 1.318332419s: waiting for machine to come up
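	[editor's note] The "will retry after ..." lines above come from a wait loop whose delay grows with jitter on each attempt until the domain reports an IP. A small, generic Go sketch of that pattern — not the retry helper minikube actually uses:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn up to maxAttempts times, sleeping a jittered,
// roughly doubling delay between attempts, as in the DHCP-lease wait above.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("done:", err)
}
```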
	I1225 13:26:35.190721 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.190828 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.203971 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:35.689934 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.690032 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.701978 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.190256 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.190355 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.204476 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.689969 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.690062 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.706632 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.189808 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.189921 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.203895 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.690391 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.690499 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.704914 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.190575 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.190694 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.208546 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.690090 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.690260 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.701827 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.190421 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.190549 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.202377 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.689978 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.690104 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.706511 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.963805 1483946 crio.go:444] Took 1.905809 seconds to copy over tarball
	I1225 13:26:37.963892 1483946 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:40.988182 1483946 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024256156s)
	I1225 13:26:40.988214 1483946 crio.go:451] Took 3.024377 seconds to extract the tarball
	I1225 13:26:40.988225 1483946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:26:41.030256 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:41.085117 1483946 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:26:41.085147 1483946 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:26:41.085236 1483946 ssh_runner.go:195] Run: crio config
	I1225 13:26:41.149962 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:26:41.149993 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:41.150020 1483946 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:41.150044 1483946 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880612 NodeName:embed-certs-880612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:41.150237 1483946 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880612"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:41.150312 1483946 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-880612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
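	[editor's note] The kubeadm YAML and kubelet drop-in above are rendered from Go values into text before being copied to /var/tmp/minikube/kubeadm.yaml.new and the systemd drop-in. A stripped-down sketch of that templating step, under the assumption that a text/template approach is used — the struct, field names, and template here are invented for illustration and cover only a few of the values seen in the log:

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds a handful of the values that feed the kubeadm config;
// the real configuration carries many more fields.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.179",
		BindPort:          8443,
		NodeName:          "embed-certs-880612",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.4",
	}
	// Render to stdout; in the log the rendered bytes are scp'd to the guest
	// as /var/tmp/minikube/kubeadm.yaml.new instead.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```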
	I1225 13:26:41.150367 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:26:41.160557 1483946 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:41.160681 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:41.170564 1483946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1225 13:26:41.187315 1483946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:26:41.204638 1483946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1225 13:26:41.222789 1483946 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:41.226604 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:41.238315 1483946 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612 for IP: 192.168.50.179
	I1225 13:26:41.238363 1483946 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:41.238614 1483946 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:41.238665 1483946 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:41.238768 1483946 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/client.key
	I1225 13:26:41.238860 1483946 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key.518daada
	I1225 13:26:41.238925 1483946 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key
	I1225 13:26:41.239060 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:41.239098 1483946 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:41.239122 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:41.239167 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:41.239204 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:41.239245 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:41.239300 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:41.240235 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:41.265422 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:41.290398 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:41.315296 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:41.339984 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:41.363071 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:41.392035 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:41.419673 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:41.444242 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:41.468314 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:41.493811 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:41.518255 1483946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:41.535605 1483946 ssh_runner.go:195] Run: openssl version
	I1225 13:26:41.541254 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:41.551784 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556610 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556686 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.562299 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:41.572173 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
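The pairing above of "openssl x509 -hash -noout" with "ln -fs ... /etc/ssl/certs/<hash>.0" is the standard OpenSSL subject-hash trust-store convention: the tools look up a CA by a symlink named after its subject hash. A minimal local Go sketch of those two steps (the certificate path is taken from the log; everything else is illustrative, not minikube's implementation):

    // hashlink.go - recreate the <subject-hash>.0 symlink OpenSSL uses to find a CA.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path seen in the log

        // openssl x509 -hash -noout prints the subject hash (e.g. b5213941).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "hashing failed:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL resolves CAs as /etc/ssl/certs/<hash>.<n>; .0 is the first entry.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // equivalent of ln -fs: drop any stale link first
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", cert)
    }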
	I1225 13:26:40.921702 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922293 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922335 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:40.922225 1484550 retry.go:31] will retry after 1.835505717s: waiting for machine to come up
	I1225 13:26:42.760187 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760688 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760714 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:42.760625 1484550 retry.go:31] will retry after 1.646709972s: waiting for machine to come up
	I1225 13:26:44.409540 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410023 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410064 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:44.409998 1484550 retry.go:31] will retry after 1.922870398s: waiting for machine to come up
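The retry.go lines above show libmachine polling for the domain's DHCP lease with a randomized, growing delay until the guest reports an IP. A minimal sketch of that pattern; lookupIP here is a stand-in for the real libvirt lease query, which the log does not show:

    // waitforip.go - poll until a VM reports an IP, backing off between attempts.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for querying the libvirt network for the
    // domain's current DHCP lease; here it simply fails a few times.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.61.39", nil
    }

    func main() {
        delay := time.Second
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Randomize and grow the delay, like the "will retry after ..." lines above.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }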
	I1225 13:26:40.190712 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.190831 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.205624 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:40.690729 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.690835 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.702671 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.190145 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.190234 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.201991 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.690585 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.690683 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.704041 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.190633 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.190745 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.202086 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.690049 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.690177 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.701556 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.701597 1483118 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:42.701611 1483118 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:42.701635 1483118 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:42.701719 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:42.745733 1483118 cri.go:89] found id: ""
	I1225 13:26:42.745835 1483118 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:42.761355 1483118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:42.773734 1483118 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:42.773812 1483118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785213 1483118 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:42.927378 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.715163 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.934803 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.024379 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
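Instead of a full "kubeadm init", the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A rough sketch of driving those phases in order, assuming kubeadm is on PATH locally (in the log it is invoked over SSH with a version-specific PATH prefix):

    // phases.go - run selected kubeadm init phases against a config file.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        config := "/var/tmp/minikube/kubeadm.yaml" // path taken from the log
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", config)
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running: kubeadm", args)
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }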
	I1225 13:26:44.106069 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:44.106200 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:44.607243 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
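The roughly half-second cadence of "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" above is a simple poll for the apiserver process to reappear after kubelet restarts the static pods. A local sketch of the same check with a deadline (the pgrep pattern is copied from the log; sudo is dropped for the sketch):

    // waitapiserver.go - poll pgrep until the kube-apiserver process appears.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // -x: match the full pattern, -n: newest match, -f: match the whole command line.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the log above
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for apiserver process")
        os.Exit(1)
    }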
	I1225 13:26:41.582062 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692062 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692156 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.698498 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:41.709171 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:41.719597 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724562 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724628 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.730571 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:41.740854 1483946 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:41.745792 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:41.752228 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:41.758318 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:41.764486 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:41.770859 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:41.777155 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
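Each of the control-plane certificates above is checked with "openssl x509 -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours; a clean pass is what lets the restart path reuse the existing certs instead of regenerating them. A small sketch of the same check (the cert list is taken from the log; reading those paths requires root on the node):

    // checkend.go - report which certs expire within the next 24 hours.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
            "/var/lib/minikube/certs/etcd/peer.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            // -checkend 86400 exits 0 only if the cert is still valid 86400s from now.
            if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
                fmt.Println(c, "expires within 24h (or could not be read):", err)
                continue
            }
            fmt.Println(c, "valid for at least another 24h")
        }
    }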
	I1225 13:26:41.783382 1483946 kubeadm.go:404] StartCluster: {Name:embed-certs-880612 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:41.783493 1483946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:41.783557 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:41.827659 1483946 cri.go:89] found id: ""
	I1225 13:26:41.827738 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:41.837713 1483946 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:41.837740 1483946 kubeadm.go:636] restartCluster start
	I1225 13:26:41.837788 1483946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:41.846668 1483946 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.847773 1483946 kubeconfig.go:92] found "embed-certs-880612" server: "https://192.168.50.179:8443"
	I1225 13:26:41.850105 1483946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:41.859124 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.859196 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.870194 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.359810 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.359906 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.371508 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.860078 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.860167 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.876302 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.359657 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.359761 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.376765 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.859950 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.860067 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.878778 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.359355 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.359439 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.371780 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.859294 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.859429 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.872286 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.359315 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.359438 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.375926 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.859453 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.859560 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.875608 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.360180 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.360335 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.376143 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.335832 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336405 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336439 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:46.336342 1484550 retry.go:31] will retry after 2.75487061s: waiting for machine to come up
	I1225 13:26:49.092529 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092962 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092986 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:49.092926 1484550 retry.go:31] will retry after 4.456958281s: waiting for machine to come up
	I1225 13:26:45.106806 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:45.607205 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.106726 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.606675 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.628821 1483118 api_server.go:72] duration metric: took 2.522750929s to wait for apiserver process to appear ...
	I1225 13:26:46.628852 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:46.628878 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.629487 1483118 api_server.go:269] stopped: https://192.168.72.232:8443/healthz: Get "https://192.168.72.232:8443/healthz": dial tcp 192.168.72.232:8443: connect: connection refused
	I1225 13:26:47.129325 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.860130 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.860255 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.875574 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.360120 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.360254 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.375470 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.860119 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.860205 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.875015 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.359513 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.359649 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.374270 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.859330 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.859424 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.871789 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.359307 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.359403 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.371339 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.859669 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.859766 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.872882 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.359345 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.359455 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.370602 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.859148 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.859271 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.871042 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.359459 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.359544 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.371252 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.824734 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:26:50.824772 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:26:50.824789 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:50.996870 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:50.996923 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.129079 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.134132 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.134169 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.629263 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.635273 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.635305 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:52.129955 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:52.135538 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:26:52.144432 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:26:52.144470 1483118 api_server.go:131] duration metric: took 5.515610636s to wait for apiserver health ...
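The healthz progression above (connection refused, then 403 for the anonymous user, then 500 while poststarthooks finish, finally 200 "ok") is exactly what the wait loop tolerates: anything other than a 200 is logged and retried. A minimal sketch of such a loop, assuming the apiserver's minikubeCA-signed certificate is not loaded locally (hence the skipped verification) and reusing the endpoint from the log:

    // healthz.go - poll the apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        url := "https://192.168.72.232:8443/healthz" // endpoint from the log
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a cert signed by minikubeCA, which this sketch
            // does not load, so certificate verification is skipped here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous user) and 500 (poststarthooks still failing)
                // both mean "not ready yet" - keep polling.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Println("healthz unreachable, retrying:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "apiserver never became healthy")
        os.Exit(1)
    }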
	I1225 13:26:52.144483 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:52.144491 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:52.146289 1483118 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:26:52.147684 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:26:52.187156 1483118 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
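The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the kvm2 + crio combination; its contents are not shown in the log. A hedged sketch that writes a conflist of roughly that shape (the subnet and plugin options below are illustrative, not the exact file minikube produced):

    // writecni.go - write an illustrative bridge CNI conflist like the one
    // scp'd to /etc/cni/net.d/1-k8s.conflist above. The exact content minikube
    // generates is not shown in the log; this is only a plausible shape.
    package main

    import (
        "fmt"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        path := "1-k8s.conflist" // written locally; minikube places it in /etc/cni/net.d
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote", path)
    }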
	I1225 13:26:52.210022 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:26:52.225137 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:26:52.225190 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:26:52.225200 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:26:52.225218 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:26:52.225230 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:26:52.225239 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:26:52.225248 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:26:52.225262 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:26:52.225272 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:26:52.225288 1483118 system_pods.go:74] duration metric: took 15.241676ms to wait for pod list to return data ...
	I1225 13:26:52.225300 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:26:52.229429 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:26:52.229471 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:26:52.229527 1483118 node_conditions.go:105] duration metric: took 4.217644ms to run NodePressure ...
	I1225 13:26:52.229549 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.630596 1483118 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635810 1483118 kubeadm.go:787] kubelet initialised
	I1225 13:26:52.635835 1483118 kubeadm.go:788] duration metric: took 5.192822ms waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635844 1483118 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:52.645095 1483118 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.652146 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652181 1483118 pod_ready.go:81] duration metric: took 7.040805ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.652194 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652203 1483118 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.658310 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658347 1483118 pod_ready.go:81] duration metric: took 6.126503ms waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.658359 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658369 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.663826 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663871 1483118 pod_ready.go:81] duration metric: took 5.492644ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.663884 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663893 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.669098 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669137 1483118 pod_ready.go:81] duration metric: took 5.230934ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.669148 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669157 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.035736 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035782 1483118 pod_ready.go:81] duration metric: took 366.614624ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.035796 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035806 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.435089 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435123 1483118 pod_ready.go:81] duration metric: took 399.30822ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.435135 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435145 1483118 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.835248 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835280 1483118 pod_ready.go:81] duration metric: took 400.124904ms waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.835290 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835299 1483118 pod_ready.go:38] duration metric: took 1.199443126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
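Each pod_ready wait above checks the pod's Ready condition but short-circuits (logging "skipping!") while the node itself is not Ready. A rough external equivalent that polls the same condition through kubectl's jsonpath output rather than minikube's internal client (context and pod name are taken from the log; the loop itself is illustrative):

    // podready.go - poll a pod's Ready condition with kubectl, the way the
    // pod_ready waits above do internally via the Kubernetes API.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(context, namespace, name string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
            "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // same budget as the log's 4m0s waits
        for time.Now().Before(deadline) {
            ready, err := podReady("no-preload-330063", "kube-system", "etcd-no-preload-330063")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for pod to become Ready")
        os.Exit(1)
    }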
	I1225 13:26:53.835317 1483118 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:26:53.848912 1483118 ops.go:34] apiserver oom_adj: -16
	I1225 13:26:53.848954 1483118 kubeadm.go:640] restartCluster took 21.184297233s
	I1225 13:26:53.848965 1483118 kubeadm.go:406] StartCluster complete in 21.235197323s
	I1225 13:26:53.849001 1483118 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.849140 1483118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:26:53.851909 1483118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.852278 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:26:53.852353 1483118 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:26:53.852461 1483118 addons.go:69] Setting storage-provisioner=true in profile "no-preload-330063"
	I1225 13:26:53.852495 1483118 addons.go:237] Setting addon storage-provisioner=true in "no-preload-330063"
	W1225 13:26:53.852507 1483118 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:26:53.852514 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:53.852555 1483118 addons.go:69] Setting default-storageclass=true in profile "no-preload-330063"
	I1225 13:26:53.852579 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852607 1483118 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-330063"
	I1225 13:26:53.852871 1483118 addons.go:69] Setting metrics-server=true in profile "no-preload-330063"
	I1225 13:26:53.852895 1483118 addons.go:237] Setting addon metrics-server=true in "no-preload-330063"
	W1225 13:26:53.852904 1483118 addons.go:246] addon metrics-server should already be in state true
	I1225 13:26:53.852948 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853315 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853361 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.858023 1483118 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-330063" context rescaled to 1 replicas
	I1225 13:26:53.858077 1483118 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:26:53.861368 1483118 out.go:177] * Verifying Kubernetes components...
	I1225 13:26:53.862819 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:26:53.870209 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1225 13:26:53.870486 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I1225 13:26:53.870693 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.870807 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.871066 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I1225 13:26:53.871329 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871341 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871426 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871433 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871742 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.871770 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.872271 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872308 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.872511 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.872896 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872923 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.873167 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.873180 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.873549 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.873693 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.878043 1483118 addons.go:237] Setting addon default-storageclass=true in "no-preload-330063"
	W1225 13:26:53.878077 1483118 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:26:53.878117 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.878613 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.878657 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.891971 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I1225 13:26:53.892418 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.893067 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.893092 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.893461 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.893634 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.895563 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.897916 1483118 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:26:53.896007 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I1225 13:26:53.899799 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:26:53.899823 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:26:53.899858 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.900294 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.900987 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.901006 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.901451 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I1225 13:26:53.902344 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.902956 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.902981 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.903419 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.903917 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.903986 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.904022 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.904445 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.904452 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.904471 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.904615 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.904785 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.906582 1483118 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:53.551972 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552449 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Found IP for machine: 192.168.61.39
	I1225 13:26:53.552500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has current primary IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552515 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserving static IP address...
	I1225 13:26:53.552918 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.552967 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | skip adding static IP to network mk-default-k8s-diff-port-344803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"}
	I1225 13:26:53.552990 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserved static IP address: 192.168.61.39
	I1225 13:26:53.553003 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for SSH to be available...
	I1225 13:26:53.553041 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Getting to WaitForSSH function...
	I1225 13:26:53.555272 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555619 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.555654 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555753 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH client type: external
	I1225 13:26:53.555785 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa (-rw-------)
	I1225 13:26:53.555828 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:53.555852 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | About to run SSH command:
	I1225 13:26:53.555872 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | exit 0
	I1225 13:26:53.642574 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:53.643094 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetConfigRaw
	I1225 13:26:53.643946 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.646842 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647308 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.647351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647580 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:26:53.647806 1484104 machine.go:88] provisioning docker machine ...
	I1225 13:26:53.647827 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:53.648054 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648255 1484104 buildroot.go:166] provisioning hostname "default-k8s-diff-port-344803"
	I1225 13:26:53.648274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648485 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.650935 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651291 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.651327 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651479 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.651718 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.651887 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.652028 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.652213 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.652587 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.652605 1484104 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-344803 && echo "default-k8s-diff-port-344803" | sudo tee /etc/hostname
	I1225 13:26:53.782284 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-344803
	
	I1225 13:26:53.782315 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.785326 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785631 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.785668 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785876 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.786149 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786374 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786600 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.786806 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.787202 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.787222 1484104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-344803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-344803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-344803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:53.916809 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:53.916844 1484104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:53.916870 1484104 buildroot.go:174] setting up certificates
	I1225 13:26:53.916882 1484104 provision.go:83] configureAuth start
	I1225 13:26:53.916900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.917233 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.920048 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920377 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.920402 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920538 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.923177 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923404 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.923437 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923584 1484104 provision.go:138] copyHostCerts
	I1225 13:26:53.923666 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:53.923686 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:53.923763 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:53.923934 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:53.923947 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:53.923978 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:53.924078 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:53.924088 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:53.924115 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:53.924207 1484104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-344803 san=[192.168.61.39 192.168.61.39 localhost 127.0.0.1 minikube default-k8s-diff-port-344803]
	I1225 13:26:54.014673 1484104 provision.go:172] copyRemoteCerts
	I1225 13:26:54.014739 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:54.014772 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.018361 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.018849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.018924 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.019089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.019351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.019559 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.019949 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.120711 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:54.155907 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1225 13:26:54.192829 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:26:54.227819 1484104 provision.go:86] duration metric: configureAuth took 310.912829ms
	I1225 13:26:54.227853 1484104 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:54.228119 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:54.228236 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.232535 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232580 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.232628 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232945 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.233215 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233422 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233608 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.233801 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.234295 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.234322 1484104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:54.638656 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:54.638772 1484104 machine.go:91] provisioned docker machine in 990.950916ms
	I1225 13:26:54.638798 1484104 start.go:300] post-start starting for "default-k8s-diff-port-344803" (driver="kvm2")
	I1225 13:26:54.638821 1484104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:54.638883 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.639341 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:54.639383 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.643369 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.643810 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.643863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.644140 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.644444 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.644624 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.644774 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.740189 1484104 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:54.745972 1484104 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:54.746009 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:54.746104 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:54.746229 1484104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:54.746362 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:54.758199 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:54.794013 1484104 start.go:303] post-start completed in 155.186268ms
	I1225 13:26:54.794048 1484104 fix.go:56] fixHost completed within 20.354368879s
	I1225 13:26:54.794077 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.797620 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798092 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.798129 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798423 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.798692 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.798900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.799067 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.799293 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.799807 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.799829 1484104 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:54.933026 1482618 start.go:369] acquired machines lock for "old-k8s-version-198979" in 59.553202424s
	I1225 13:26:54.933097 1482618 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:54.933105 1482618 fix.go:54] fixHost starting: 
	I1225 13:26:54.933577 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:54.933620 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:54.956206 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I1225 13:26:54.956801 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:54.958396 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:26:54.958425 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:54.958887 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:54.959164 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:26:54.959384 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:26:54.961270 1482618 fix.go:102] recreateIfNeeded on old-k8s-version-198979: state=Stopped err=<nil>
	I1225 13:26:54.961305 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	W1225 13:26:54.961494 1482618 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:54.963775 1482618 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-198979" ...
	I1225 13:26:53.904908 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.908114 1483118 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:53.908130 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:26:53.908147 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.908370 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.912254 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.912861 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.912885 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.913096 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.913324 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.913510 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.913629 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.959638 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I1225 13:26:53.960190 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.960890 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.960913 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.961320 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.961603 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.963927 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.964240 1483118 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:53.964262 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:26:53.964282 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.967614 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968092 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.968155 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968471 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.968679 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.968879 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.969040 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:54.064639 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:26:54.064674 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:26:54.093609 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:54.147415 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:26:54.147449 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:26:54.148976 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:54.160381 1483118 node_ready.go:35] waiting up to 6m0s for node "no-preload-330063" to be "Ready" ...
	I1225 13:26:54.160490 1483118 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:26:54.202209 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.202242 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:26:54.276251 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.965270 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Start
	I1225 13:26:54.965680 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring networks are active...
	I1225 13:26:54.966477 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network default is active
	I1225 13:26:54.966919 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network mk-old-k8s-version-198979 is active
	I1225 13:26:54.967420 1482618 main.go:141] libmachine: (old-k8s-version-198979) Getting domain xml...
	I1225 13:26:54.968585 1482618 main.go:141] libmachine: (old-k8s-version-198979) Creating domain...
	I1225 13:26:55.590940 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.497275379s)
	I1225 13:26:55.591005 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591020 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591108 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442107411s)
	I1225 13:26:55.591127 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591136 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591247 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.314957717s)
	I1225 13:26:55.591268 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.595765 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.595838 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.595847 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.595859 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.595867 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596016 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596049 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596058 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596067 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596075 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596177 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596218 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596226 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596236 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596244 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596485 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596515 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596929 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596972 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596979 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596990 1483118 addons.go:473] Verifying addon metrics-server=true in "no-preload-330063"
	I1225 13:26:55.597032 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.597067 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.597076 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.610755 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.610788 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.611238 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.611264 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.613767 1483118 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1225 13:26:51.859989 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.860081 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.871647 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.871684 1483946 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:51.871709 1483946 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:51.871725 1483946 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:51.871817 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:51.919587 1483946 cri.go:89] found id: ""
	I1225 13:26:51.919706 1483946 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:51.935341 1483946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:51.944522 1483946 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:51.944588 1483946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954607 1483946 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954637 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.092831 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.921485 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.161902 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.270786 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.340226 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:53.340331 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:53.841309 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.341486 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.841104 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.341409 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.841238 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.867371 1483946 api_server.go:72] duration metric: took 2.52714535s to wait for apiserver process to appear ...
	I1225 13:26:55.867406 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:55.867434 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:55.867970 1483946 api_server.go:269] stopped: https://192.168.50.179:8443/healthz: Get "https://192.168.50.179:8443/healthz": dial tcp 192.168.50.179:8443: connect: connection refused
	I1225 13:26:56.368335 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:54.932810 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510814.876127642
	
	I1225 13:26:54.932838 1484104 fix.go:206] guest clock: 1703510814.876127642
	I1225 13:26:54.932848 1484104 fix.go:219] Guest: 2023-12-25 13:26:54.876127642 +0000 UTC Remote: 2023-12-25 13:26:54.794053361 +0000 UTC m=+104.977714576 (delta=82.074281ms)
	I1225 13:26:54.932878 1484104 fix.go:190] guest clock delta is within tolerance: 82.074281ms
	I1225 13:26:54.932885 1484104 start.go:83] releasing machines lock for "default-k8s-diff-port-344803", held for 20.493256775s
	I1225 13:26:54.932920 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.933380 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:54.936626 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.937262 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937534 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938366 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938583 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938676 1484104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:54.938722 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.938826 1484104 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:54.938854 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.942392 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.942792 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.942831 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.943292 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.943487 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.943635 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.943764 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.943922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.944870 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.945020 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.945066 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.945318 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.945498 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.945743 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:55.069674 1484104 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:55.078333 1484104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:55.247706 1484104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:55.256782 1484104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:55.256885 1484104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:55.278269 1484104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:55.278303 1484104 start.go:475] detecting cgroup driver to use...
	I1225 13:26:55.278383 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:55.302307 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:55.322161 1484104 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:55.322345 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:55.342241 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:55.361128 1484104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:55.547880 1484104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:55.693711 1484104 docker.go:219] disabling docker service ...
	I1225 13:26:55.693804 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:55.708058 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:55.721136 1484104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:55.890044 1484104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:56.042549 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:56.061359 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:56.086075 1484104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:56.086169 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.100059 1484104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:56.100162 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.113858 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.127589 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.140964 1484104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:56.155180 1484104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:56.167585 1484104 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:56.167716 1484104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:56.186467 1484104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:56.200044 1484104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:56.339507 1484104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:56.563294 1484104 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:56.563385 1484104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:56.570381 1484104 start.go:543] Will wait 60s for crictl version
	I1225 13:26:56.570477 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:26:56.575675 1484104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:56.617219 1484104 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:56.617322 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.679138 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.751125 1484104 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:56.752677 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:56.756612 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757108 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:56.757142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757502 1484104 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:56.763739 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:56.781952 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:56.782029 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:56.840852 1484104 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:56.840939 1484104 ssh_runner.go:195] Run: which lz4
	I1225 13:26:56.845412 1484104 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:26:56.850135 1484104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:56.850181 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:58.731034 1484104 crio.go:444] Took 1.885656 seconds to copy over tarball
	I1225 13:26:58.731138 1484104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:55.615056 1483118 addons.go:508] enable addons completed in 1.762702944s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1225 13:26:56.169115 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:58.665700 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:56.860066 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting to get IP...
	I1225 13:26:56.860987 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:56.861644 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:56.861765 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:56.861626 1484760 retry.go:31] will retry after 198.102922ms: waiting for machine to come up
	I1225 13:26:57.061281 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.062001 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.062029 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.061907 1484760 retry.go:31] will retry after 299.469436ms: waiting for machine to come up
	I1225 13:26:57.362874 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.363385 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.363441 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.363363 1484760 retry.go:31] will retry after 460.796393ms: waiting for machine to come up
	I1225 13:26:57.826330 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.827065 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.827098 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.827021 1484760 retry.go:31] will retry after 397.690798ms: waiting for machine to come up
	I1225 13:26:58.226942 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.227490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.227528 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.227437 1484760 retry.go:31] will retry after 731.798943ms: waiting for machine to come up
	I1225 13:26:58.960490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.960969 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.961000 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.960930 1484760 retry.go:31] will retry after 577.614149ms: waiting for machine to come up
	I1225 13:26:59.540871 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:59.541581 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:59.541607 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:59.541494 1484760 retry.go:31] will retry after 1.177902051s: waiting for machine to come up
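The libmachine lines above show the KVM driver polling the domain for an IP address and backing off between attempts (retry.go:31). Below is a minimal Go sketch of that retry-with-growing-delay pattern, with a hypothetical probe function standing in for the libvirt lease lookup; minikube's real retry helper differs in detail.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls probe until it yields an address, sleeping a randomized,
// growing delay between attempts -- the same shape as the
// "will retry after Nms: waiting for machine to come up" lines above.
func waitForIP(probe func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	// Dummy probe that "finds" an IP on the third attempt.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.39", nil
	}, 10)
	fmt.Println(ip, err)
}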
	I1225 13:27:00.799310 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.799355 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.799376 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.905272 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.905311 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.905330 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.922285 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.922324 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:01.367590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.374093 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.374155 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.440592 1484104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.709419632s)
	I1225 13:27:02.440624 1484104 crio.go:451] Took 3.709555 seconds to extract the tarball
	I1225 13:27:02.440636 1484104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:02.504136 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:02.613720 1484104 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:27:02.613752 1484104 cache_images.go:84] Images are preloaded, skipping loading
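Once the preload tarball is unpacked, the image check above shells out to `sudo crictl images --output json` and compares the result against the expected image list (cache_images.go:84). A minimal sketch of that comparison follows, assuming the usual crictl JSON shape with an `images` array carrying `repoTags`; the field names and the expected-image list are assumptions, not minikube's exact types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors only the fields this sketch cares about from
// `crictl images --output json`; the exact schema is an assumption.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the entries of want that crictl does not report as present.
func missingImages(want []string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var listed crictlImages
	if err := json.Unmarshal(out, &listed); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, w := range want {
		if !have[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	// Example image name; the real preload manifest lists many more.
	missing, err := missingImages([]string{"registry.k8s.io/kube-apiserver:v1.28.4"})
	fmt.Println(missing, err)
}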
	I1225 13:27:02.613839 1484104 ssh_runner.go:195] Run: crio config
	I1225 13:27:02.685414 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:02.685436 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:02.685459 1484104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:02.685477 1484104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-344803 NodeName:default-k8s-diff-port-344803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:27:02.685627 1484104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-344803"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:02.685710 1484104 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-344803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
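The kubeadm.yaml printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Here is a minimal sketch, using gopkg.in/yaml.v3, of splitting such a stream and listing each document's apiVersion/kind; this is an illustration only, not minikube's bootstrapper code, and the file path in main is a placeholder.

package main

import (
	"fmt"
	"io"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// listKinds decodes a multi-document YAML stream (like the kubeadm.yaml above)
// and returns the apiVersion/kind pair of every document it finds.
func listKinds(r io.Reader) ([]string, error) {
	dec := yaml.NewDecoder(r)
	var kinds []string
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		kinds = append(kinds, doc.APIVersion+"/"+doc.Kind)
	}
	return kinds, nil
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // placeholder path
	if err != nil {
		// Fall back to a tiny inline sample so the sketch still runs anywhere.
		sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
		kinds, _ := listKinds(strings.NewReader(sample))
		fmt.Println(kinds)
		return
	}
	defer f.Close()
	kinds, err := listKinds(f)
	fmt.Println(kinds, err)
}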
	I1225 13:27:02.685778 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:27:02.696327 1484104 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:02.696420 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:02.707369 1484104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1225 13:27:02.728181 1484104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:02.748934 1484104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1225 13:27:02.770783 1484104 ssh_runner.go:195] Run: grep 192.168.61.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:02.775946 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:02.790540 1484104 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803 for IP: 192.168.61.39
	I1225 13:27:02.790590 1484104 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:02.790792 1484104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:02.790862 1484104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:02.790961 1484104 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.key
	I1225 13:27:02.859647 1484104 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key.daee23f3
	I1225 13:27:02.859773 1484104 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key
	I1225 13:27:02.859934 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:02.859993 1484104 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:02.860010 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:02.860037 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:02.860061 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:02.860082 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:02.860121 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:02.860871 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:02.889354 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 13:27:02.916983 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:02.943348 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:27:02.969940 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:02.996224 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:03.021662 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:03.052589 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:03.080437 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:03.107973 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:03.134921 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:03.161948 1484104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:03.184606 1484104 ssh_runner.go:195] Run: openssl version
	I1225 13:27:03.192305 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:03.204868 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209793 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209895 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.216568 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:03.229131 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:03.241634 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247328 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247397 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.253730 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:03.267063 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:03.281957 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288393 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288481 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.295335 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:03.307900 1484104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:03.313207 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:03.319949 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:03.327223 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:03.333927 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:03.341434 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:03.349298 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
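The certificate steps above follow the OpenSSL convention for trusted CAs: compute the subject hash with `openssl x509 -hash -noout` and symlink the PEM under `<hash>.0` in /etc/ssl/certs. Below is a minimal sketch of that step, shelling out to openssl exactly as the log does; the paths in main are placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject-hash name
// (e.g. /etc/ssl/certs/b5213941.0), mirroring the ln -fs step in the log.
func installCA(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point the hash name at the PEM file.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}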
	I1225 13:27:03.356303 1484104 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:03.356409 1484104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:03.356463 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:03.407914 1484104 cri.go:89] found id: ""
	I1225 13:27:03.407991 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:03.418903 1484104 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:03.418928 1484104 kubeadm.go:636] restartCluster start
	I1225 13:27:03.418981 1484104 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:03.429758 1484104 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.431242 1484104 kubeconfig.go:92] found "default-k8s-diff-port-344803" server: "https://192.168.61.39:8444"
	I1225 13:27:03.433847 1484104 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:03.443564 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.443648 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.457188 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.943692 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.943806 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.956490 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:04.443680 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.443781 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.464817 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:00.671397 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:27:01.665347 1483118 node_ready.go:49] node "no-preload-330063" has status "Ready":"True"
	I1225 13:27:01.665383 1483118 node_ready.go:38] duration metric: took 7.504959726s waiting for node "no-preload-330063" to be "Ready" ...
	I1225 13:27:01.665398 1483118 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:01.675515 1483118 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688377 1483118 pod_ready.go:92] pod "coredns-76f75df574-pwk9h" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:01.688467 1483118 pod_ready.go:81] duration metric: took 12.819049ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688492 1483118 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:03.697007 1483118 pod_ready.go:102] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:04.379595 1483118 pod_ready.go:92] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.379628 1483118 pod_ready.go:81] duration metric: took 2.691119222s waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.379643 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393427 1483118 pod_ready.go:92] pod "kube-apiserver-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.393459 1483118 pod_ready.go:81] duration metric: took 13.806505ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393473 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454291 1483118 pod_ready.go:92] pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.454387 1483118 pod_ready.go:81] duration metric: took 60.903507ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454417 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525436 1483118 pod_ready.go:92] pod "kube-proxy-jbch6" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.525471 1483118 pod_ready.go:81] duration metric: took 71.040817ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525486 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546670 1483118 pod_ready.go:92] pod "kube-scheduler-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.546709 1483118 pod_ready.go:81] duration metric: took 21.213348ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546726 1483118 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
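The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True and record how long the wait took. Below is a minimal client-go sketch of the same condition check, assuming a kubeconfig path on disk; it illustrates the pattern rather than reproducing minikube's helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls the named kube-system pod until it is Ready or the timeout expires.
func waitForPod(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q never became Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPod(cs, "etcd-no-preload-330063", 6*time.Minute))
}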
	I1225 13:27:01.868308 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.913335 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.913393 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.367660 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.375382 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.375424 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.867590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.873638 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.873680 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.368014 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.377785 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.377827 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.867933 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.873979 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.874013 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.367576 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.377835 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.377884 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.868444 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.879138 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.879187 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:05.367519 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:05.377570 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:27:05.388572 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:05.388605 1483946 api_server.go:131] duration metric: took 9.521192442s to wait for apiserver health ...
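The api_server.go lines above keep probing /healthz, logging 403 and 500 responses as "not ready yet" and stopping only at a plain 200 "ok". Below is a minimal sketch of that probe against the URL from the log, skipping TLS verification the way an anonymous bootstrap check must; the timeout and polling interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, echoing the non-200
// statuses the same way the api_server.go lines above do.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not trusted by this probe, so skip
		// verification -- acceptable only for a local bootstrap check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.179:8443/healthz", 2*time.Minute))
}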
	I1225 13:27:05.388615 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:27:05.388625 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:05.390592 1483946 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:00.720918 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:00.721430 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:00.721457 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:00.721380 1484760 retry.go:31] will retry after 931.125211ms: waiting for machine to come up
	I1225 13:27:01.654661 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:01.655341 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:01.655367 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:01.655263 1484760 retry.go:31] will retry after 1.333090932s: waiting for machine to come up
	I1225 13:27:02.991018 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:02.991520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:02.991555 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:02.991468 1484760 retry.go:31] will retry after 2.006684909s: waiting for machine to come up
	I1225 13:27:05.000424 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:05.000972 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:05.001023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:05.000908 1484760 retry.go:31] will retry after 2.72499386s: waiting for machine to come up
	I1225 13:27:05.391952 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:05.406622 1483946 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:05.429599 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:05.441614 1483946 system_pods.go:59] 9 kube-system pods found
	I1225 13:27:05.441681 1483946 system_pods.go:61] "coredns-5dd5756b68-4jqz4" [026524a6-1f73-4644-8a80-b276326178b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441698 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441710 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:05.441721 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:05.441732 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:05.441746 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:05.441758 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:05.441773 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:05.441790 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:27:05.441812 1483946 system_pods.go:74] duration metric: took 12.174684ms to wait for pod list to return data ...
	I1225 13:27:05.441824 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:05.447018 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:05.447064 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:05.447079 1483946 node_conditions.go:105] duration metric: took 5.247366ms to run NodePressure ...
	I1225 13:27:05.447106 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:05.767972 1483946 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774281 1483946 kubeadm.go:787] kubelet initialised
	I1225 13:27:05.774307 1483946 kubeadm.go:788] duration metric: took 6.300121ms waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774316 1483946 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:05.781474 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.789698 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789732 1483946 pod_ready.go:81] duration metric: took 8.22748ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.789746 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789758 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.798517 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798584 1483946 pod_ready.go:81] duration metric: took 8.811967ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.798601 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798612 1483946 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.804958 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.804998 1483946 pod_ready.go:81] duration metric: took 6.356394ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.805018 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.805028 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.834502 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834549 1483946 pod_ready.go:81] duration metric: took 29.510044ms waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.834561 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834571 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.234676 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234728 1483946 pod_ready.go:81] duration metric: took 400.145957ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.234742 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234752 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.634745 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634785 1483946 pod_ready.go:81] duration metric: took 400.019189ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.634798 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634807 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.034762 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034793 1483946 pod_ready.go:81] duration metric: took 399.977148ms waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.034803 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034810 1483946 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.433932 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433969 1483946 pod_ready.go:81] duration metric: took 399.14889ms waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.433982 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433992 1483946 pod_ready.go:38] duration metric: took 1.659666883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:07.434016 1483946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:07.448377 1483946 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:07.448405 1483946 kubeadm.go:640] restartCluster took 25.610658268s
	I1225 13:27:07.448415 1483946 kubeadm.go:406] StartCluster complete in 25.665045171s
	I1225 13:27:07.448443 1483946 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.448530 1483946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:07.451369 1483946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.453102 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:07.453244 1483946 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:07.453332 1483946 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880612"
	I1225 13:27:07.453351 1483946 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-880612"
	W1225 13:27:07.453363 1483946 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:07.453432 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453450 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:27:07.453516 1483946 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880612"
	I1225 13:27:07.453536 1483946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880612"
	I1225 13:27:07.453860 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453870 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453902 1483946 addons.go:69] Setting metrics-server=true in profile "embed-certs-880612"
	I1225 13:27:07.453917 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.453925 1483946 addons.go:237] Setting addon metrics-server=true in "embed-certs-880612"
	W1225 13:27:07.454160 1483946 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:07.454211 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453903 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.454601 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.454669 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.476508 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1225 13:27:07.476720 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1225 13:27:07.477202 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477210 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477794 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477815 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.477957 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477971 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.478407 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.478478 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.479041 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.479083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.480350 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.483762 1483946 addons.go:237] Setting addon default-storageclass=true in "embed-certs-880612"
	W1225 13:27:07.483783 1483946 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:07.483816 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.484249 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.484285 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.489369 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1225 13:27:07.489817 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.490332 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.490354 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.491339 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.494037 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.494083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.501003 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1225 13:27:07.501737 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.502399 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.502422 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.502882 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.503092 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.505387 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.507725 1483946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:07.509099 1483946 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.509121 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:07.509153 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.513153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.513923 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.513957 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.514226 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.514426 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.514610 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.515190 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.516933 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I1225 13:27:07.517681 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.518194 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.518220 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.518784 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1225 13:27:07.519309 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.519400 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.519930 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.519956 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.520525 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.520573 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.520819 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.521050 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.523074 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.525265 1483946 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:07.526542 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:07.526569 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:07.526598 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.530316 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.530846 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.530883 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.531223 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.531571 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.531832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.532070 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.544917 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1225 13:27:07.545482 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.546037 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.546059 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.546492 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.546850 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.548902 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.549177 1483946 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.549196 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:07.549218 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.553036 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553541 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.553572 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553784 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.554642 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.554893 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.555581 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.676244 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.704310 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.718012 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:07.718043 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:07.779041 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:07.779073 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:07.786154 1483946 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:07.812338 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.812373 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:07.837795 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.974099 1483946 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-880612" context rescaled to 1 replicas
	I1225 13:27:07.974158 1483946 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:07.977116 1483946 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:07.978618 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:09.163988 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.459630406s)
	I1225 13:27:09.164059 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164073 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164091 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.487803106s)
	I1225 13:27:09.164129 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164149 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164617 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164624 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164629 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.164639 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164641 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164651 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164653 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164661 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164666 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164622 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165025 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165095 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165121 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.165172 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165186 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.188483 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.188510 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.188847 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.188898 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.188906 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.193684 1483946 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.215023208s)
	I1225 13:27:09.193736 1483946 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:09.193789 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.355953438s)
	I1225 13:27:09.193825 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.193842 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.194176 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.194192 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.194208 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.194219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.195998 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.196000 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.196033 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.196044 1483946 addons.go:473] Verifying addon metrics-server=true in "embed-certs-880612"
	I1225 13:27:09.198211 1483946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:04.943819 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.943958 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.960056 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.443699 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.443795 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.461083 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.943713 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.943821 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.960712 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.444221 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.444305 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.458894 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.944546 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.944630 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.958754 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.444332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.444462 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.491468 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.943982 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.944135 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.960697 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.444285 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.444408 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.461209 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.943720 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.943866 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.959990 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:09.444604 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.444727 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.463020 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.556605 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:08.560748 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:07.728505 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:07.728994 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:07.729023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:07.728936 1484760 retry.go:31] will retry after 2.39810797s: waiting for machine to come up
	I1225 13:27:10.129402 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:10.129925 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:10.129960 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:10.129860 1484760 retry.go:31] will retry after 4.278491095s: waiting for machine to come up
	I1225 13:27:09.199531 1483946 addons.go:508] enable addons completed in 1.746293071s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:11.199503 1483946 node_ready.go:49] node "embed-certs-880612" has status "Ready":"True"
	I1225 13:27:11.199529 1483946 node_ready.go:38] duration metric: took 2.005779632s waiting for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:11.199541 1483946 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:11.207447 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:09.943841 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.943948 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.960478 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.444037 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.444309 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.463480 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.943760 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.943886 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.960191 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.444602 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.444702 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.458181 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.943674 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.943783 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.956418 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.443719 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.443835 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.456707 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.944332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.944434 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.957217 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.443965 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:13.444076 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:13.455968 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.456008 1484104 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:13.456051 1484104 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:13.456067 1484104 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:13.456145 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:13.497063 1484104 cri.go:89] found id: ""
	I1225 13:27:13.497135 1484104 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:13.513279 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:13.522816 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:13.522885 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532580 1484104 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532612 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:13.668876 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:14.848056 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179140695s)
	I1225 13:27:14.848090 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:11.072420 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:13.555685 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:14.413456 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:14.414013 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:14.414043 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:14.413960 1484760 retry.go:31] will retry after 4.470102249s: waiting for machine to come up
	I1225 13:27:11.714710 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.714747 1483946 pod_ready.go:81] duration metric: took 507.263948ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.714760 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720448 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.720472 1483946 pod_ready.go:81] duration metric: took 5.705367ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720481 1483946 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725691 1483946 pod_ready.go:92] pod "etcd-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.725717 1483946 pod_ready.go:81] duration metric: took 5.229718ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725725 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238949 1483946 pod_ready.go:92] pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.238979 1483946 pod_ready.go:81] duration metric: took 1.513246575s waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238992 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244957 1483946 pod_ready.go:92] pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.244980 1483946 pod_ready.go:81] duration metric: took 5.981457ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244991 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609255 1483946 pod_ready.go:92] pod "kube-proxy-677d7" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.609282 1483946 pod_ready.go:81] duration metric: took 364.285426ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609292 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621505 1483946 pod_ready.go:92] pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:15.621540 1483946 pod_ready.go:81] duration metric: took 2.012239726s waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621553 1483946 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.047153 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.142405 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.237295 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:15.237406 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:15.737788 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.238003 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.738328 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.238494 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.738177 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.237676 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.259279 1484104 api_server.go:72] duration metric: took 3.021983877s to wait for apiserver process to appear ...
	I1225 13:27:18.259305 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:18.259331 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:15.555810 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.056361 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.888547 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889138 1482618 main.go:141] libmachine: (old-k8s-version-198979) Found IP for machine: 192.168.39.186
	I1225 13:27:18.889167 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserving static IP address...
	I1225 13:27:18.889183 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has current primary IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889631 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.889672 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserved static IP address: 192.168.39.186
	I1225 13:27:18.889702 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | skip adding static IP to network mk-old-k8s-version-198979 - found existing host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"}
	I1225 13:27:18.889724 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Getting to WaitForSSH function...
	I1225 13:27:18.889741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting for SSH to be available...
	I1225 13:27:18.892133 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892475 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.892509 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH client type: external
	I1225 13:27:18.892658 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa (-rw-------)
	I1225 13:27:18.892688 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:27:18.892703 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | About to run SSH command:
	I1225 13:27:18.892722 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | exit 0
	I1225 13:27:18.991797 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | SSH cmd err, output: <nil>: 
	I1225 13:27:18.992203 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetConfigRaw
	I1225 13:27:18.992943 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:18.996016 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996344 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.996416 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996762 1482618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:27:18.996990 1482618 machine.go:88] provisioning docker machine ...
	I1225 13:27:18.997007 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:18.997254 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997454 1482618 buildroot.go:166] provisioning hostname "old-k8s-version-198979"
	I1225 13:27:18.997483 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997670 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.000725 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001114 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.001144 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001332 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.001504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001686 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001836 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.002039 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.002592 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.002614 1482618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-198979 && echo "old-k8s-version-198979" | sudo tee /etc/hostname
	I1225 13:27:19.148260 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-198979
	
	I1225 13:27:19.148291 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.151692 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152160 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.152196 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152350 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.152566 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152743 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152941 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.153133 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.153647 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.153678 1482618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-198979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-198979/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-198979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:27:19.294565 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:27:19.294606 1482618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:27:19.294635 1482618 buildroot.go:174] setting up certificates
	I1225 13:27:19.294649 1482618 provision.go:83] configureAuth start
	I1225 13:27:19.294663 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:19.295039 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:19.298511 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.298933 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.298971 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.299137 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.302045 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302486 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.302520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302682 1482618 provision.go:138] copyHostCerts
	I1225 13:27:19.302777 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:27:19.302806 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:27:19.302869 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:27:19.302994 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:27:19.303012 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:27:19.303042 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:27:19.303103 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:27:19.303113 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:27:19.303131 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:27:19.303177 1482618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-198979 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube old-k8s-version-198979]
	I1225 13:27:19.444049 1482618 provision.go:172] copyRemoteCerts
	I1225 13:27:19.444142 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:27:19.444180 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.447754 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448141 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.448174 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448358 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.448593 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.448818 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.448994 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:19.545298 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:27:19.576678 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:27:19.604520 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:27:19.631640 1482618 provision.go:86] duration metric: configureAuth took 336.975454ms
	I1225 13:27:19.631674 1482618 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:27:19.631899 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:19.632012 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.635618 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636130 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.636166 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636644 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.636903 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637088 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637315 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.637511 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.638005 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.638040 1482618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:27:19.990807 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:27:19.990844 1482618 machine.go:91] provisioned docker machine in 993.840927ms
	I1225 13:27:19.990857 1482618 start.go:300] post-start starting for "old-k8s-version-198979" (driver="kvm2")
	I1225 13:27:19.990870 1482618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:27:19.990908 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:19.991349 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:27:19.991388 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.994622 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.994980 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.995015 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.995147 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.995402 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.995574 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.995713 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.089652 1482618 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:27:20.094575 1482618 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:27:20.094611 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:27:20.094716 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:27:20.094856 1482618 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:27:20.095010 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:27:20.105582 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:20.133802 1482618 start.go:303] post-start completed in 142.928836ms
	I1225 13:27:20.133830 1482618 fix.go:56] fixHost completed within 25.200724583s
	I1225 13:27:20.133860 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.137215 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137635 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.137670 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.138081 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138322 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138518 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.138732 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:20.139194 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:20.139228 1482618 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:27:20.268572 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510840.203941272
	
	I1225 13:27:20.268602 1482618 fix.go:206] guest clock: 1703510840.203941272
	I1225 13:27:20.268613 1482618 fix.go:219] Guest: 2023-12-25 13:27:20.203941272 +0000 UTC Remote: 2023-12-25 13:27:20.133835417 +0000 UTC m=+384.781536006 (delta=70.105855ms)
	I1225 13:27:20.268641 1482618 fix.go:190] guest clock delta is within tolerance: 70.105855ms
	I1225 13:27:20.268651 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 25.335582747s
	I1225 13:27:20.268683 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.268981 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:20.272181 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.272666 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272948 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273612 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273851 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273925 1482618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:27:20.273990 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.274108 1482618 ssh_runner.go:195] Run: cat /version.json
	I1225 13:27:20.274133 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.277090 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277381 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.277608 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278041 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278066 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.278085 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.278284 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278293 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278500 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.278516 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278691 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278852 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.395858 1482618 ssh_runner.go:195] Run: systemctl --version
	I1225 13:27:20.403417 1482618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:27:17.629846 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:19.635250 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:20.559485 1482618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:27:20.566356 1482618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:27:20.566487 1482618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:27:20.584531 1482618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:27:20.584565 1482618 start.go:475] detecting cgroup driver to use...
	I1225 13:27:20.584648 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:27:20.599889 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:27:20.613197 1482618 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:27:20.613278 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:27:20.626972 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:27:20.640990 1482618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:27:20.752941 1482618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:27:20.886880 1482618 docker.go:219] disabling docker service ...
	I1225 13:27:20.886971 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:27:20.903143 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:27:20.919083 1482618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:27:21.042116 1482618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:27:21.171997 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:27:21.185237 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:27:21.204711 1482618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:27:21.204787 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.215196 1482618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:27:21.215276 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.226411 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.239885 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.250576 1482618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:27:21.263723 1482618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:27:21.274356 1482618 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:27:21.274462 1482618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:27:21.288126 1482618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:27:21.300772 1482618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:27:21.467651 1482618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:27:21.700509 1482618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:27:21.700618 1482618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:27:21.708118 1482618 start.go:543] Will wait 60s for crictl version
	I1225 13:27:21.708207 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:21.712687 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:27:21.768465 1482618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:27:21.768563 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.836834 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.907627 1482618 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1225 13:27:21.288635 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.288669 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.288685 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.374966 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.375010 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.760268 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.771864 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:21.771898 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.259417 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.271720 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.271779 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.760217 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.767295 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.767333 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:23.259377 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:23.265348 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:27:23.275974 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:23.276010 1484104 api_server.go:131] duration metric: took 5.01669783s to wait for apiserver health ...
	I1225 13:27:23.276024 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:23.276033 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:23.278354 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:23.279804 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:23.300762 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:23.326548 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:23.346826 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:27:23.346871 1484104 system_pods.go:61] "coredns-5dd5756b68-l7qnn" [860c88a5-5bb9-4556-814a-08f1cc882c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:23.346884 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [eca3b322-fbba-4d8e-b8be-10b7f552bd32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:23.346896 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [730b8b80-bf80-4769-b4cd-7e81b0600599] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:23.346908 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [8424df4f-e2d8-4f22-8593-21cf0ccc82eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:23.346965 1484104 system_pods.go:61] "kube-proxy-wnjn2" [ed9e8d7e-d237-46ab-84d1-a78f7f931aab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:23.346988 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [f865e5a4-4b21-4d15-a437-47965f0d1db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:23.347009 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-zgrj5" [d52789c5-dfe7-48e6-9dfd-a7dc5b5be6ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:23.347099 1484104 system_pods.go:61] "storage-provisioner" [96723fff-956b-42c4-864b-b18afb0c0285] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:27:23.347116 1484104 system_pods.go:74] duration metric: took 20.540773ms to wait for pod list to return data ...
	I1225 13:27:23.347135 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:23.358619 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:23.358673 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:23.358690 1484104 node_conditions.go:105] duration metric: took 11.539548ms to run NodePressure ...
	I1225 13:27:23.358716 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:23.795558 1484104 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804103 1484104 kubeadm.go:787] kubelet initialised
	I1225 13:27:23.804125 1484104 kubeadm.go:788] duration metric: took 8.535185ms waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804133 1484104 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:23.814199 1484104 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:20.557056 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:22.569215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.054111 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:21.909021 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:21.912423 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.912802 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:21.912828 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.913199 1482618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:27:21.917615 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:21.931709 1482618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 13:27:21.931830 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:21.991133 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:21.991246 1482618 ssh_runner.go:195] Run: which lz4
	I1225 13:27:21.997721 1482618 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1225 13:27:22.003171 1482618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:27:22.003218 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1225 13:27:23.975639 1482618 crio.go:444] Took 1.977982 seconds to copy over tarball
	I1225 13:27:23.975723 1482618 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:27:21.643721 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:24.132742 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.827617 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:28.322507 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.055526 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.558580 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.243294 1482618 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.267535049s)
	I1225 13:27:27.243339 1482618 crio.go:451] Took 3.267670 seconds to extract the tarball
	I1225 13:27:27.243368 1482618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:27.285528 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:27.338914 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:27.338948 1482618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:27:27.339078 1482618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.339115 1482618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.339118 1482618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.339160 1482618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.339114 1482618 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.339054 1482618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.339059 1482618 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:27:27.339060 1482618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340631 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.340647 1482618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.340658 1482618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.340632 1482618 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.340666 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340635 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.502560 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.502567 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.510502 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1225 13:27:27.513052 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.518668 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.522882 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.553027 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.608178 1482618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1225 13:27:27.608235 1482618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.608294 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.655271 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.671173 1482618 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1225 13:27:27.671223 1482618 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1225 13:27:27.671283 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.671290 1482618 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1225 13:27:27.671330 1482618 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.671378 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728043 1482618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1225 13:27:27.728102 1482618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.728139 1482618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1225 13:27:27.728159 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728187 1482618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.728222 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739034 1482618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1225 13:27:27.739077 1482618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.739133 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739156 1482618 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1225 13:27:27.739205 1482618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.739213 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.739261 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.858062 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1225 13:27:27.858089 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.858143 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.858175 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.858237 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.858301 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1225 13:27:27.858358 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:28.004051 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:27:28.004125 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1225 13:27:28.004183 1482618 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.004226 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1225 13:27:28.004304 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1225 13:27:28.004369 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1225 13:27:28.005012 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1225 13:27:28.009472 1482618 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1225 13:27:28.009491 1482618 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.009550 1482618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1225 13:27:29.560553 1482618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550970125s)
	I1225 13:27:29.560586 1482618 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1225 13:27:29.560668 1482618 cache_images.go:92] LoadImages completed in 2.22170407s
	W1225 13:27:29.560766 1482618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
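Cached images are shipped to the guest as tarballs and loaded into the CRI-O image store with "sudo podman load -i ...", as in the commands above; a tarball missing from the host cache (kube-scheduler here) only produces the warning rather than aborting the start. A minimal, hypothetical sketch of the load step in Go, assuming the tarball is already present and passwordless sudo is available (minikube itself runs this over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads an image tarball into the container runtime's image
// store via podman, mirroring the "sudo podman load -i ..." commands above.
// The helper name and local execution are illustrative only.
func loadCachedImage(path string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}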
	I1225 13:27:29.560846 1482618 ssh_runner.go:195] Run: crio config
	I1225 13:27:29.639267 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:29.639298 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:29.639324 1482618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:29.639375 1482618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-198979 NodeName:old-k8s-version-198979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 13:27:29.639598 1482618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-198979"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-198979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.186:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:29.639711 1482618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-198979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
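The kubeadm and kubelet manifests above are generated from the option struct logged at kubeadm.go:176 and written to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration only (not minikube's actual types or template), rendering such a manifest with Go's text/template could look like the following; the Opts struct and the trimmed field set are assumptions:

package main

import (
	"os"
	"text/template"
)

// Opts is a hypothetical subset of the options seen in the kubeadm.go:176
// line above; minikube's real struct has many more fields.
type Opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	ClusterName       string
	PodSubnet         string
	KubernetesVersion string
}

// kubeadmTmpl mirrors the shape of the InitConfiguration/ClusterConfiguration
// dump above, heavily trimmed for illustration.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the run above.
	opts := Opts{
		AdvertiseAddress:  "192.168.39.186",
		APIServerPort:     8443,
		ClusterName:       "old-k8s-version-198979",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.16.0",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}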
	I1225 13:27:29.639800 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1225 13:27:29.649536 1482618 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:29.649614 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:29.658251 1482618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1225 13:27:29.678532 1482618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:29.698314 1482618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1225 13:27:29.718873 1482618 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:29.723656 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
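The control-plane.minikube.internal entry is kept in /etc/hosts with the idempotent one-liner above: filter out any existing entry, append the current address-to-name mapping, and copy the result back. A small sketch of building that command in Go; the helper name is hypothetical and the quoting is slightly simplified:

package main

import "fmt"

// hostsEntryCmd builds an idempotent /etc/hosts update: drop any line that
// already maps the host name, append a fresh "ip<TAB>host" entry, and copy
// the temp file back over /etc/hosts (as in the command above).
func hostsEntryCmd(ip, host string) string {
	entry := ip + "\t" + host // real tab between address and name
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, entry)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.39.186", "control-plane.minikube.internal"))
}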
	I1225 13:27:29.737736 1482618 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979 for IP: 192.168.39.186
	I1225 13:27:29.737787 1482618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:29.738006 1482618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:29.738069 1482618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:29.738147 1482618 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.key
	I1225 13:27:29.738211 1482618 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key.d0691019
	I1225 13:27:29.738252 1482618 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key
	I1225 13:27:29.738456 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:29.738501 1482618 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:29.738511 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:29.738543 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:29.738578 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:29.738617 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:29.738682 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:29.739444 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:29.765303 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:27:29.790702 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:29.818835 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 13:27:29.845659 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:29.872043 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:29.902732 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:29.928410 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:29.954350 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:29.978557 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:30.007243 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:30.036876 1482618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:30.055990 1482618 ssh_runner.go:195] Run: openssl version
	I1225 13:27:30.062813 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:30.075937 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082034 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082145 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.089645 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:30.102657 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:30.115701 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120635 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120711 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.128051 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:30.139465 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:30.151046 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156574 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156656 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.162736 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
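Each CA bundle is made trusted inside the guest by copying it under /usr/share/ca-certificates and symlinking it into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 names above). A hypothetical local sketch of that flow; minikube itself issues the equivalent openssl and ln commands over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert symlinks certPath into /etc/ssl/certs under the OpenSSL
// subject-hash name (<hash>.0), mirroring the "ln -fs" commands in the log.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}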
	I1225 13:27:30.174356 1482618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:30.180962 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:30.187746 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:30.194481 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:30.202279 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:30.210555 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:30.218734 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
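The openssl x509 -checkend 86400 calls above confirm that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509, as a sketch with an illustrative helper name and file path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition that "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}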
	I1225 13:27:30.225325 1482618 kubeadm.go:404] StartCluster: {Name:old-k8s-version-198979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:30.225424 1482618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:30.225478 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:30.274739 1482618 cri.go:89] found id: ""
	I1225 13:27:30.274842 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:30.285949 1482618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:30.285980 1482618 kubeadm.go:636] restartCluster start
	I1225 13:27:30.286051 1482618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:30.295521 1482618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:30.296804 1482618 kubeconfig.go:92] found "old-k8s-version-198979" server: "https://192.168.39.186:8443"
	I1225 13:27:30.299493 1482618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:30.308641 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.308745 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.320654 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:26.631365 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.129943 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.131590 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.329682 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.824743 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.824770 1484104 pod_ready.go:81] duration metric: took 8.010540801s waiting for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.824781 1484104 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830321 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.830347 1484104 pod_ready.go:81] duration metric: took 5.559816ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830358 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338865 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:32.338898 1484104 pod_ready.go:81] duration metric: took 508.532498ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338913 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846030 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.846054 1484104 pod_ready.go:81] duration metric: took 1.507133449s waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846065 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851826 1484104 pod_ready.go:92] pod "kube-proxy-wnjn2" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.851846 1484104 pod_ready.go:81] duration metric: took 5.775207ms waiting for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851855 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.054359 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:34.054586 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.809359 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.809482 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.821194 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.308690 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.308830 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.322775 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.809511 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.809612 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.823928 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.309450 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.309569 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.320937 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.809587 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.809686 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.822957 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.308905 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.308992 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.321195 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.808702 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.808803 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.820073 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.309661 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.309760 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.322931 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.809599 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.809724 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.825650 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:35.308697 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.308798 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.321313 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.630973 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.128884 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.859839 1484104 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.359809 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:36.359838 1484104 pod_ready.go:81] duration metric: took 2.507975576s waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:36.359853 1484104 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:38.371707 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.554699 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:39.053732 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.809083 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.809186 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.821434 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.309100 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.309181 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.322566 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.809026 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.809136 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.820791 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.309382 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.309501 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.321365 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.809397 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.809515 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.821538 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.309716 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.309819 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.321060 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.809627 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.809728 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.821784 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.309363 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.309483 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.320881 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.809420 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.809597 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.820752 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.308911 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:40.309009 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:40.322568 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.322614 1482618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:40.322653 1482618 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:40.322670 1482618 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:40.322730 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:40.366271 1482618 cri.go:89] found id: ""
	I1225 13:27:40.366365 1482618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:40.383123 1482618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:40.392329 1482618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:40.392412 1482618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401435 1482618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401471 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:38.131920 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.629516 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.868311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:42.872952 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:41.054026 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:43.054332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.538996 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.466467 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.697265 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.796796 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.898179 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:41.898290 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.398616 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.899373 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.399246 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.898788 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.923617 1482618 api_server.go:72] duration metric: took 2.025431683s to wait for apiserver process to appear ...
	I1225 13:27:43.923650 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:43.923684 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:42.632296 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.128501 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.368613 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.868011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.054778 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.559938 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:48.924695 1482618 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 13:27:48.924755 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.954284 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.954379 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:49.954401 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.985515 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.985568 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:50.424616 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.431560 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.431604 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:50.924173 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.935578 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.935622 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:51.424341 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:51.431709 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:27:51.440822 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:27:51.440855 1482618 api_server.go:131] duration metric: took 7.517198191s to wait for apiserver health ...
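The healthz probe above is retried on a short interval (roughly every 500 ms in this run) and treats the 403 and 500 responses, which appear while RBAC bootstrap roles and other post-start hooks are still settling, as "not ready yet". A simplified sketch of such a poll loop; the timeout, interval and the insecure TLS config are assumptions, and minikube's real client also presents the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
// InsecureSkipVerify is used only because this is a throwaway local probe.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.186:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}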
	I1225 13:27:51.440866 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:51.440873 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:51.442446 1482618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:47.130936 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:49.132275 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:51.443830 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:51.456628 1482618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:51.477822 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:51.487046 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:27:51.487082 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:27:51.487087 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:27:51.487091 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:27:51.487096 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Pending
	I1225 13:27:51.487100 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:27:51.487103 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:27:51.487107 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:27:51.487113 1482618 system_pods.go:74] duration metric: took 9.266811ms to wait for pod list to return data ...
	I1225 13:27:51.487120 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:51.491782 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:51.491817 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:51.491831 1482618 node_conditions.go:105] duration metric: took 4.70597ms to run NodePressure ...
	I1225 13:27:51.491855 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:51.768658 1482618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776258 1482618 kubeadm.go:787] kubelet initialised
	I1225 13:27:51.776283 1482618 kubeadm.go:788] duration metric: took 7.588357ms waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776293 1482618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:51.784053 1482618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.791273 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791314 1482618 pod_ready.go:81] duration metric: took 7.223677ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.791328 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791338 1482618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.801453 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801491 1482618 pod_ready.go:81] duration metric: took 10.138221ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.801505 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801514 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.809536 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809577 1482618 pod_ready.go:81] duration metric: took 8.051285ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.809590 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809608 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.882231 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882268 1482618 pod_ready.go:81] duration metric: took 72.643349ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.882299 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882309 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.282486 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282531 1482618 pod_ready.go:81] duration metric: took 400.208562ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.282543 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282552 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.689279 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689329 1482618 pod_ready.go:81] duration metric: took 406.764819ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.689343 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689353 1482618 pod_ready.go:38] duration metric: took 913.049281ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
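The pod_ready waits above poll each system-critical pod's Ready condition, and deliberately skip the wait (with the WaitExtra messages shown) while the hosting node itself is still NotReady. A compact client-go sketch of the Ready check; the kubeconfig path and helper name are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(cs, "kube-system", "coredns-5644d7b6d9-mk9jx")
	fmt.Println(ok, err)
}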
	I1225 13:27:52.689387 1482618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:52.705601 1482618 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:52.705628 1482618 kubeadm.go:640] restartCluster took 22.419638621s
	I1225 13:27:52.705639 1482618 kubeadm.go:406] StartCluster complete in 22.480335985s
	I1225 13:27:52.705663 1482618 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.705760 1482618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:52.708825 1482618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.709185 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:52.709313 1482618 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:52.709404 1482618 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709427 1482618 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-198979"
	W1225 13:27:52.709435 1482618 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:52.709443 1482618 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709460 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:52.709466 1482618 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709475 1482618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-198979"
	I1225 13:27:52.709482 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709488 1482618 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-198979"
	W1225 13:27:52.709502 1482618 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:52.709553 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709914 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709953 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709964 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709992 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709965 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.710046 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.729360 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1225 13:27:52.730016 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.730343 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1225 13:27:52.730527 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1225 13:27:52.730777 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.730808 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.730852 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731329 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.731365 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.731381 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.731589 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.731638 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731715 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.732311 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.732360 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.732731 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.732763 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.733225 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.733787 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.733859 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.735675 1482618 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-198979"
	W1225 13:27:52.735694 1482618 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:52.735725 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.736079 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.736117 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.751072 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I1225 13:27:52.752097 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.753002 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.753022 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.753502 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.753741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.756158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.758410 1482618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:52.758080 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I1225 13:27:52.759927 1482618 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.759942 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:52.759963 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.760521 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.761648 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.761665 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.762046 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.762823 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.762872 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.763974 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764712 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.764748 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764752 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.765009 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.765461 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.791493 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.792265 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.792294 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.792795 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.793023 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.795238 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.799536 1482618 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:52.800892 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:52.800920 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:52.800955 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.804762 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806571 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.806568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.806606 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806957 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.807115 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.807260 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.811419 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1225 13:27:52.811816 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.812352 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.812379 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.812872 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.813083 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.814823 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.815122 1482618 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:52.815138 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:52.815158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.818411 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.818892 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.818926 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.819253 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.819504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.819705 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.819981 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
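For reference, the three `sshutil.go:53] new ssh client` entries above amount to dialing the node with key auth as the `docker` user on port 22. A minimal sketch with golang.org/x/crypto/ssh, reusing the IP and key path reported in this run (illustrative only, not minikube's actual sshutil code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values as logged by sshutil.go above; adjust for another profile.
	host := "192.168.39.186"
	keyPath := "/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", fmt.Sprintf("%s:22", host), cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected to", host)
}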
	I1225 13:27:52.963144 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.974697 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:52.974733 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:53.021391 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:53.039959 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:53.039991 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:53.121390 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.121421 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:53.196232 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
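The `ssh_runner.go:195] Run:` commands above (the storage-provisioner, storageclass, and metrics-server applies) are each executed over such an SSH connection, one session per command, with combined output kept for the log. A rough equivalent of that step, assuming a client like the one in the previous sketch:

package runner

import "golang.org/x/crypto/ssh"

// runRemote runs a single command on the node and returns its combined
// stdout/stderr, roughly what the ssh_runner Run step does conceptually.
func runRemote(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}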
	I1225 13:27:53.256419 1482618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-198979" context rescaled to 1 replicas
	I1225 13:27:53.256479 1482618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:53.258366 1482618 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:53.259807 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:53.276151 1482618 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:53.687341 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687374 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.687666 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.687690 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.687701 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687710 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.689261 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.689286 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.689294 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.725954 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.725985 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.726715 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.726737 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.726743 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.726776 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.726787 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.727040 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.727054 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.727061 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.744318 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.744356 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.744696 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.744745 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.846817 1482618 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:27:53.846878 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.846899 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847234 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847301 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847317 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847329 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.847351 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847728 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847767 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847793 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847810 1482618 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-198979"
	I1225 13:27:53.850107 1482618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:49.870506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.369916 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:50.056130 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.562555 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:53.851456 1482618 addons.go:508] enable addons completed in 1.14214354s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:51.635205 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.131852 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.868902 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.367267 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.368997 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.057522 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.555214 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.851206 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:58.350906 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:28:00.350892 1482618 node_ready.go:49] node "old-k8s-version-198979" has status "Ready":"True"
	I1225 13:28:00.350918 1482618 node_ready.go:38] duration metric: took 6.504066205s waiting for node "old-k8s-version-198979" to be "Ready" ...
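The node_ready.go wait above boils down to polling the Node object until its Ready condition reports True. A minimal client-go sketch of that check; the kubeconfig path is the on-node one quoted earlier in this log, and the loop is illustrative rather than minikube's exact implementation:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-198979", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}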
	I1225 13:28:00.350928 1482618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:00.355882 1482618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362249 1482618 pod_ready.go:92] pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.362281 1482618 pod_ready.go:81] duration metric: took 6.362168ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362290 1482618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367738 1482618 pod_ready.go:92] pod "etcd-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.367777 1482618 pod_ready.go:81] duration metric: took 5.478984ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367790 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373724 1482618 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.373754 1482618 pod_ready.go:81] duration metric: took 5.95479ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373774 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380810 1482618 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.380841 1482618 pod_ready.go:81] duration metric: took 7.058206ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380854 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:56.635216 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.129464 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:01.132131 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.750612 1482618 pod_ready.go:92] pod "kube-proxy-vw9lf" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.750641 1482618 pod_ready.go:81] duration metric: took 369.779347ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.750651 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151567 1482618 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:01.151596 1482618 pod_ready.go:81] duration metric: took 400.937167ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151617 1482618 pod_ready.go:38] duration metric: took 800.677743ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
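The per-pod waits logged by pod_ready.go apply the same pattern to each system-critical pod: fetch it and look at the PodReady condition. A compact helper along those lines (a sketch, not the test's own code):

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod has condition Ready=True,
// which is the check the pod_ready.go lines in this log keep retrying.
func isPodReady(cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}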
	I1225 13:28:01.151634 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:28:01.151694 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:28:01.170319 1482618 api_server.go:72] duration metric: took 7.913795186s to wait for apiserver process to appear ...
	I1225 13:28:01.170349 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:28:01.170368 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:28:01.177133 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:28:01.178326 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:28:01.178351 1482618 api_server.go:131] duration metric: took 7.994163ms to wait for apiserver health ...
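The healthz step above is a plain HTTPS GET against the endpoint shown in the log, expecting a 200 with body "ok". A minimal sketch; certificate verification is skipped only because this is a disposable test VM, and it assumes anonymous access to /healthz is enabled (the default RBAC grants it):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test VM only; real callers should trust the cluster CA
	}}
	resp, err := client.Get("https://192.168.39.186:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}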
	I1225 13:28:01.178361 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:28:01.352663 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:28:01.352693 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.352697 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.352702 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.352706 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.352710 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.352714 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.352718 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.352724 1482618 system_pods.go:74] duration metric: took 174.35745ms to wait for pod list to return data ...
	I1225 13:28:01.352731 1482618 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:28:01.554095 1482618 default_sa.go:45] found service account: "default"
	I1225 13:28:01.554129 1482618 default_sa.go:55] duration metric: took 201.391529ms for default service account to be created ...
	I1225 13:28:01.554139 1482618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:28:01.757666 1482618 system_pods.go:86] 7 kube-system pods found
	I1225 13:28:01.757712 1482618 system_pods.go:89] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.757724 1482618 system_pods.go:89] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.757731 1482618 system_pods.go:89] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.757747 1482618 system_pods.go:89] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.757754 1482618 system_pods.go:89] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.757763 1482618 system_pods.go:89] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.757769 1482618 system_pods.go:89] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.757785 1482618 system_pods.go:126] duration metric: took 203.63938ms to wait for k8s-apps to be running ...
	I1225 13:28:01.757800 1482618 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:28:01.757863 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:28:01.771792 1482618 system_svc.go:56] duration metric: took 13.980705ms WaitForService to wait for kubelet.
	I1225 13:28:01.771821 1482618 kubeadm.go:581] duration metric: took 8.515309843s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:28:01.771843 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:28:01.952426 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:28:01.952463 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:28:01.952477 1482618 node_conditions.go:105] duration metric: took 180.629128ms to run NodePressure ...
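The NodePressure step reads the Node object back and reports the capacity figures quoted above, plus the pressure conditions. Sketched briefly with a client-go clientset as in the earlier snippet (again an illustration, not node_conditions.go itself):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportPressure prints the capacity the log quotes above and flags any
// pressure condition that is not False.
func reportPressure(cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status != corev1.ConditionFalse {
				fmt.Println("pressure:", c.Type, c.Status)
			}
		}
	}
	return nil
}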
	I1225 13:28:01.952493 1482618 start.go:228] waiting for startup goroutines ...
	I1225 13:28:01.952500 1482618 start.go:233] waiting for cluster config update ...
	I1225 13:28:01.952512 1482618 start.go:242] writing updated cluster config ...
	I1225 13:28:01.952974 1482618 ssh_runner.go:195] Run: rm -f paused
	I1225 13:28:02.007549 1482618 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I1225 13:28:02.009559 1482618 out.go:177] 
	W1225 13:28:02.011242 1482618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I1225 13:28:02.012738 1482618 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1225 13:28:02.014029 1482618 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-198979" cluster and "default" namespace by default
	I1225 13:28:01.869370 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.368824 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.055713 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:02.553981 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.554824 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:03.629358 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.130616 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.869993 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.367869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:07.054835 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.554904 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:08.130786 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:10.632435 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:11.368789 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.867665 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:12.054007 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:14.554676 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.129854 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.628997 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.869048 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:18.368070 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:16.557633 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:19.054486 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:17.629072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.129902 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.868173 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.868637 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:21.555027 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.054858 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.133148 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.630133 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:25.369437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.870029 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:26.056198 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:28.555876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.129583 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:29.629963 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:30.367773 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.368497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.369791 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:31.053212 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:33.054315 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.128310 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.130650 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.869325 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.367488 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:35.056761 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:37.554917 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.632857 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.129518 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.368425 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:43.868157 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:40.054854 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:42.555015 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:45.053900 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.630558 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:44.132072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.366422 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:48.368331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:47.056378 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.555186 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.629415 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.129249 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:51.129692 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:50.868321 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.366805 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:52.053785 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:54.057533 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.629427 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.629652 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.368197 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.867659 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.868187 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:56.556558 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.055474 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.629912 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.630858 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.868360 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:03.870936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.555132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.053887 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:02.127901 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.131186 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.367634 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.867571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.054546 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.554559 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.629995 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:09.129898 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:10.868677 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:12.868979 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.055554 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:13.554637 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.629511 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.129806 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.872549 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:17.371705 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:19.868438 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.054016 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.055476 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.629688 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.630125 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:21.132102 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.367525 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:24.369464 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:20.554660 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.556044 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:25.054213 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:23.630061 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.132281 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.868977 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.367384 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:27.055844 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.554124 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:28.630474 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:30.631070 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.367691 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.867941 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.555167 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.557066 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:32.634599 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:35.131402 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.369081 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.868497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.054764 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.054975 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:37.629895 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:39.630456 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:41.366745 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:43.367883 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:40.554998 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.555257 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.130638 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:44.629851 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.371692 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.866965 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.868100 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.057506 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.555247 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:46.632874 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.129782 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.130176 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.868818 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.868968 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:50.055939 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:52.556609 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.054048 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.132556 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.632608 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:56.368065 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.868076 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:57.054224 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:59.554940 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.128545 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.129437 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.868364 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:03.368093 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.054215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.056019 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.129706 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.130092 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:05.867992 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:07.872121 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.554889 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:09.056197 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.630974 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:08.632171 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.128952 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:10.367536 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:12.369331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.554738 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.555681 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.129878 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:15.130470 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:14.868630 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.367768 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.368295 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:16.054391 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:18.054606 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.630479 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.630971 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:21.873194 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.368931 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:20.054866 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.554974 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:25.053696 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.130831 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.630755 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:26.867555 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:28.868612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.054706 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.055614 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.133840 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.630572 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:30.868716 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.369710 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:31.554882 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.556367 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:32.129865 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:34.129987 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.870671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:38.367237 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.557755 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:37.559481 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:36.630513 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:39.130271 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.368072 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.869043 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.055427 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.554787 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.053876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:41.629178 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:43.630237 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.631199 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:44.873439 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.367548 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.368066 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.555106 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.556132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:48.130206 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:50.629041 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:51.369311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:53.870853 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.055511 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:54.061135 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.630215 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.130153 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.873755 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:58.367682 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:56.554861 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.054344 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:57.629571 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.630560 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:00.372506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:02.867084 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:01.554332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:03.554717 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.555955 1483118 pod_ready.go:81] duration metric: took 4m0.009196678s waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:04.555987 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:04.555994 1483118 pod_ready.go:38] duration metric: took 4m2.890580557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:04.556014 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:04.556050 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:04.556152 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:04.615717 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:04.615748 1483118 cri.go:89] found id: ""
	I1225 13:31:04.615759 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:04.615830 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.621669 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:04.621778 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:04.661088 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:04.661127 1483118 cri.go:89] found id: ""
	I1225 13:31:04.661139 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:04.661191 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.666410 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:04.666496 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:04.710927 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:04.710962 1483118 cri.go:89] found id: ""
	I1225 13:31:04.710973 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:04.711041 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.715505 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:04.715587 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:04.761494 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:04.761518 1483118 cri.go:89] found id: ""
	I1225 13:31:04.761527 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:04.761580 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.766925 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:04.767015 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:04.810640 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:04.810670 1483118 cri.go:89] found id: ""
	I1225 13:31:04.810685 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:04.810753 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.815190 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:04.815285 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:04.858275 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:04.858301 1483118 cri.go:89] found id: ""
	I1225 13:31:04.858309 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:04.858362 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.863435 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:04.863529 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:04.914544 1483118 cri.go:89] found id: ""
	I1225 13:31:04.914583 1483118 logs.go:284] 0 containers: []
	W1225 13:31:04.914594 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:04.914603 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:04.914675 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:04.969548 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:04.969577 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:04.969584 1483118 cri.go:89] found id: ""
	I1225 13:31:04.969594 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:04.969660 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.974172 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.978956 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:04.978989 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:05.033590 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:05.033632 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:02.133447 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.630226 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.869025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:07.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.369061 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:05.085851 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:05.085879 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:05.144002 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:05.144047 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:05.191669 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:05.191703 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:05.238581 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:05.238617 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:05.253236 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:05.253271 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:05.293626 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:05.293674 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:05.338584 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:05.338622 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:05.381135 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:05.381172 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:05.886860 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:05.886918 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:06.045040 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:06.045080 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.101152 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:06.101192 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.662518 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:08.678649 1483118 api_server.go:72] duration metric: took 4m14.820531999s to wait for apiserver process to appear ...
	I1225 13:31:08.678687 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:08.678729 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:08.678791 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:08.718202 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:08.718246 1483118 cri.go:89] found id: ""
	I1225 13:31:08.718255 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:08.718305 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.723089 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:08.723177 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:08.772619 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:08.772641 1483118 cri.go:89] found id: ""
	I1225 13:31:08.772649 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:08.772709 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.777577 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:08.777669 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:08.818869 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:08.818900 1483118 cri.go:89] found id: ""
	I1225 13:31:08.818910 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:08.818970 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.823301 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:08.823382 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:08.868885 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:08.868913 1483118 cri.go:89] found id: ""
	I1225 13:31:08.868924 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:08.868982 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.873489 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:08.873562 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:08.916925 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:08.916957 1483118 cri.go:89] found id: ""
	I1225 13:31:08.916967 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:08.917065 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.921808 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:08.921901 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:08.961586 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.961617 1483118 cri.go:89] found id: ""
	I1225 13:31:08.961628 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:08.961707 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.965986 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:08.966096 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:09.012223 1483118 cri.go:89] found id: ""
	I1225 13:31:09.012262 1483118 logs.go:284] 0 containers: []
	W1225 13:31:09.012270 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:09.012278 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:09.012343 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:09.060646 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:09.060675 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:09.060683 1483118 cri.go:89] found id: ""
	I1225 13:31:09.060694 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:09.060767 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.065955 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.070859 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:09.070890 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:09.128056 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:09.128096 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:09.179304 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:09.179341 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:09.194019 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:09.194048 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:09.339697 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:09.339743 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:09.389626 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:09.389669 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:09.831437 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:09.831498 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:09.888799 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:09.888848 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:09.932201 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:09.932232 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:09.983201 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:09.983242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:10.039094 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:10.039149 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.630567 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.130605 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:11.369445 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.870404 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:10.095628 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:10.095677 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:10.139678 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:10.139717 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:12.688297 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:31:12.693469 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:31:12.694766 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:31:12.694788 1483118 api_server.go:131] duration metric: took 4.016094906s to wait for apiserver health ...
	I1225 13:31:12.694796 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:12.694821 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:12.694876 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:12.743143 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:12.743174 1483118 cri.go:89] found id: ""
	I1225 13:31:12.743185 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:12.743238 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.747708 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:12.747803 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:12.800511 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:12.800540 1483118 cri.go:89] found id: ""
	I1225 13:31:12.800549 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:12.800612 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.805236 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:12.805308 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:12.850047 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:12.850081 1483118 cri.go:89] found id: ""
	I1225 13:31:12.850092 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:12.850152 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.854516 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:12.854602 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:12.902131 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:12.902162 1483118 cri.go:89] found id: ""
	I1225 13:31:12.902173 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:12.902239 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.907546 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:12.907634 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:12.966561 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:12.966590 1483118 cri.go:89] found id: ""
	I1225 13:31:12.966601 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:12.966674 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.971071 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:12.971161 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:13.026823 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.026851 1483118 cri.go:89] found id: ""
	I1225 13:31:13.026862 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:13.026927 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.031499 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:13.031576 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:13.077486 1483118 cri.go:89] found id: ""
	I1225 13:31:13.077512 1483118 logs.go:284] 0 containers: []
	W1225 13:31:13.077520 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:13.077526 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:13.077589 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:13.130262 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.130287 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.130294 1483118 cri.go:89] found id: ""
	I1225 13:31:13.130305 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:13.130364 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.138345 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.142749 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:13.142780 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:13.264652 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:13.264694 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:13.315138 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:13.315182 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:13.375532 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:13.375570 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.418188 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:13.418226 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:13.433392 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:13.433423 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:13.472447 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:13.472481 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.514578 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:13.514631 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:13.568962 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:13.569001 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:13.609819 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:13.609864 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.668114 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:13.668160 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:13.710116 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:13.710155 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:14.068484 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:14.068548 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:11.629829 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.632277 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:15.629964 1483946 pod_ready.go:81] duration metric: took 4m0.008391697s waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:15.629997 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:15.630006 1483946 pod_ready.go:38] duration metric: took 4m4.430454443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:15.630022 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:15.630052 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:15.630113 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:15.694629 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:15.694654 1483946 cri.go:89] found id: ""
	I1225 13:31:15.694666 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:15.694735 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.699777 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:15.699847 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:15.744267 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:15.744299 1483946 cri.go:89] found id: ""
	I1225 13:31:15.744308 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:15.744361 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.749213 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:15.749310 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:15.796903 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:15.796930 1483946 cri.go:89] found id: ""
	I1225 13:31:15.796939 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:15.797001 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.801601 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:15.801673 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:15.841792 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:15.841820 1483946 cri.go:89] found id: ""
	I1225 13:31:15.841830 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:15.841902 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.845893 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:15.845970 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:15.901462 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:15.901493 1483946 cri.go:89] found id: ""
	I1225 13:31:15.901505 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:15.901589 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.907173 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:15.907264 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:15.957143 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:15.957177 1483946 cri.go:89] found id: ""
	I1225 13:31:15.957186 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:15.957239 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.962715 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:15.962789 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:16.007949 1483946 cri.go:89] found id: ""
	I1225 13:31:16.007988 1483946 logs.go:284] 0 containers: []
	W1225 13:31:16.007999 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:16.008008 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:16.008076 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:16.063958 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:16.063984 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:16.063989 1483946 cri.go:89] found id: ""
	I1225 13:31:16.063997 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:16.064052 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.069193 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.074310 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:16.074333 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:16.120318 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:16.120363 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:16.176217 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:16.176264 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:16.633470 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:16.633507 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.633512 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.633516 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.633521 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.633525 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.633529 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.633536 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.633541 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.633548 1483118 system_pods.go:74] duration metric: took 3.938745899s to wait for pod list to return data ...
	I1225 13:31:16.633556 1483118 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:16.637279 1483118 default_sa.go:45] found service account: "default"
	I1225 13:31:16.637314 1483118 default_sa.go:55] duration metric: took 3.749637ms for default service account to be created ...
	I1225 13:31:16.637325 1483118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:16.644466 1483118 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:16.644501 1483118 system_pods.go:89] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.644509 1483118 system_pods.go:89] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.644516 1483118 system_pods.go:89] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.644523 1483118 system_pods.go:89] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.644530 1483118 system_pods.go:89] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.644536 1483118 system_pods.go:89] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.644547 1483118 system_pods.go:89] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.644558 1483118 system_pods.go:89] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.644583 1483118 system_pods.go:126] duration metric: took 7.250639ms to wait for k8s-apps to be running ...
	I1225 13:31:16.644594 1483118 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:16.644658 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:16.661680 1483118 system_svc.go:56] duration metric: took 17.070893ms WaitForService to wait for kubelet.
	I1225 13:31:16.661723 1483118 kubeadm.go:581] duration metric: took 4m22.80360778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:16.661754 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:16.666189 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:16.666227 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:16.666294 1483118 node_conditions.go:105] duration metric: took 4.531137ms to run NodePressure ...
	I1225 13:31:16.666313 1483118 start.go:228] waiting for startup goroutines ...
	I1225 13:31:16.666323 1483118 start.go:233] waiting for cluster config update ...
	I1225 13:31:16.666338 1483118 start.go:242] writing updated cluster config ...
	I1225 13:31:16.666702 1483118 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:16.729077 1483118 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:31:16.732824 1483118 out.go:177] * Done! kubectl is now configured to use "no-preload-330063" cluster and "default" namespace by default
	I1225 13:31:16.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:18.374788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:16.686611 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:16.686650 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:16.748667 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:16.748705 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:16.937661 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:16.937700 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:16.988870 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:16.988908 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:17.048278 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:17.048316 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:17.095857 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:17.095900 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:17.135425 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:17.135460 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:17.197626 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:17.197670 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:17.213658 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:17.213695 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:17.282101 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:17.282149 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:19.824939 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:19.840944 1483946 api_server.go:72] duration metric: took 4m11.866743679s to wait for apiserver process to appear ...
	I1225 13:31:19.840985 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:19.841036 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:19.841114 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:19.895404 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:19.895445 1483946 cri.go:89] found id: ""
	I1225 13:31:19.895455 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:19.895519 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.900604 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:19.900686 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:19.943623 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:19.943652 1483946 cri.go:89] found id: ""
	I1225 13:31:19.943662 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:19.943728 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.948230 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:19.948298 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:19.993271 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:19.993296 1483946 cri.go:89] found id: ""
	I1225 13:31:19.993304 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:19.993355 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.997702 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:19.997790 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:20.043487 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.043514 1483946 cri.go:89] found id: ""
	I1225 13:31:20.043525 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:20.043591 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.047665 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:20.047748 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:20.091832 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.091867 1483946 cri.go:89] found id: ""
	I1225 13:31:20.091878 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:20.091947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.096400 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:20.096463 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:20.136753 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.136785 1483946 cri.go:89] found id: ""
	I1225 13:31:20.136794 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:20.136867 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.141479 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:20.141559 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:20.184635 1483946 cri.go:89] found id: ""
	I1225 13:31:20.184677 1483946 logs.go:284] 0 containers: []
	W1225 13:31:20.184688 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:20.184694 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:20.184770 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:20.231891 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.231918 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.231923 1483946 cri.go:89] found id: ""
	I1225 13:31:20.231932 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:20.231991 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.236669 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.240776 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:20.240804 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:20.305411 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:20.305479 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:20.376688 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:20.376729 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:20.419016 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:20.419060 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.465253 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:20.465288 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.505949 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:20.505994 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.565939 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:20.565995 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.608765 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:20.608798 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.646031 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:20.646076 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:20.694772 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:20.694812 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:20.710038 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:20.710074 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:20.841944 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:20.841996 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:21.267824 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:21.267884 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:20.869158 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:22.870463 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:23.834749 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:31:23.840763 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:31:23.842396 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:31:23.842424 1483946 api_server.go:131] duration metric: took 4.001431078s to wait for apiserver health ...
	I1225 13:31:23.842451 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:23.842481 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:23.842535 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:23.901377 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:23.901409 1483946 cri.go:89] found id: ""
	I1225 13:31:23.901420 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:23.901489 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.906312 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:23.906382 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:23.957073 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:23.957105 1483946 cri.go:89] found id: ""
	I1225 13:31:23.957115 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:23.957175 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.961899 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:23.961968 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:24.009529 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:24.009575 1483946 cri.go:89] found id: ""
	I1225 13:31:24.009587 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:24.009656 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.014579 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:24.014668 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:24.059589 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:24.059618 1483946 cri.go:89] found id: ""
	I1225 13:31:24.059629 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:24.059698 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.065185 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:24.065265 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:24.123904 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.123932 1483946 cri.go:89] found id: ""
	I1225 13:31:24.123942 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:24.124006 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.128753 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:24.128849 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:24.172259 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:24.172285 1483946 cri.go:89] found id: ""
	I1225 13:31:24.172296 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:24.172363 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.177276 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:24.177356 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:24.223415 1483946 cri.go:89] found id: ""
	I1225 13:31:24.223445 1483946 logs.go:284] 0 containers: []
	W1225 13:31:24.223453 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:24.223459 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:24.223516 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:24.267840 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:24.267866 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:24.267870 1483946 cri.go:89] found id: ""
	I1225 13:31:24.267878 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:24.267939 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.272947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.279183 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:24.279213 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:24.343548 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:24.343592 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:24.398275 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:24.398312 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.443435 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:24.443472 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:24.814711 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:24.814770 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:24.828613 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:24.828649 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:24.979501 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:24.979538 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:25.028976 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:25.029011 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:25.083148 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:25.083191 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:25.155284 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:25.155336 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:25.213437 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:25.213483 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:25.260934 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:25.260973 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:25.307395 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:25.307430 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:27.884673 1483946 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:27.884702 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.884708 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.884713 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.884717 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.884721 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.884725 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.884731 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.884737 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.884744 1483946 system_pods.go:74] duration metric: took 4.04228589s to wait for pod list to return data ...
	I1225 13:31:27.884752 1483946 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:27.889125 1483946 default_sa.go:45] found service account: "default"
	I1225 13:31:27.889156 1483946 default_sa.go:55] duration metric: took 4.397454ms for default service account to be created ...
	I1225 13:31:27.889167 1483946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:27.896851 1483946 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:27.896879 1483946 system_pods.go:89] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.896884 1483946 system_pods.go:89] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.896889 1483946 system_pods.go:89] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.896894 1483946 system_pods.go:89] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.896898 1483946 system_pods.go:89] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.896901 1483946 system_pods.go:89] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.896908 1483946 system_pods.go:89] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.896912 1483946 system_pods.go:89] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.896920 1483946 system_pods.go:126] duration metric: took 7.747348ms to wait for k8s-apps to be running ...
	I1225 13:31:27.896929 1483946 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:27.896981 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:27.917505 1483946 system_svc.go:56] duration metric: took 20.559839ms WaitForService to wait for kubelet.
	I1225 13:31:27.917542 1483946 kubeadm.go:581] duration metric: took 4m19.94335169s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:27.917568 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:27.921689 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:27.921715 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:27.921797 1483946 node_conditions.go:105] duration metric: took 4.219723ms to run NodePressure ...
	I1225 13:31:27.921814 1483946 start.go:228] waiting for startup goroutines ...
	I1225 13:31:27.921825 1483946 start.go:233] waiting for cluster config update ...
	I1225 13:31:27.921838 1483946 start.go:242] writing updated cluster config ...
	I1225 13:31:27.922130 1483946 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:27.976011 1483946 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:31:27.978077 1483946 out.go:177] * Done! kubectl is now configured to use "embed-certs-880612" cluster and "default" namespace by default
	I1225 13:31:24.870628 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:26.873379 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:29.367512 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:31.367730 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:33.867551 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:36.360292 1484104 pod_ready.go:81] duration metric: took 4m0.000407846s waiting for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:36.360349 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" (will not retry!)
	I1225 13:31:36.360378 1484104 pod_ready.go:38] duration metric: took 4m12.556234617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:36.360445 1484104 kubeadm.go:640] restartCluster took 4m32.941510355s
	W1225 13:31:36.360540 1484104 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1225 13:31:36.360578 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1225 13:31:50.552320 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.191703988s)
	I1225 13:31:50.552417 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:50.569621 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:31:50.581050 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:31:50.591777 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:31:50.591837 1484104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:31:50.651874 1484104 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 13:31:50.651952 1484104 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:31:50.822009 1484104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:31:50.822174 1484104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:31:50.822258 1484104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:31:51.074237 1484104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:31:51.077463 1484104 out.go:204]   - Generating certificates and keys ...
	I1225 13:31:51.077575 1484104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:31:51.077637 1484104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:31:51.077703 1484104 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 13:31:51.077755 1484104 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1225 13:31:51.077816 1484104 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1225 13:31:51.077908 1484104 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1225 13:31:51.078059 1484104 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1225 13:31:51.078715 1484104 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1225 13:31:51.079408 1484104 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 13:31:51.080169 1484104 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 13:31:51.080635 1484104 kubeadm.go:322] [certs] Using the existing "sa" key
	I1225 13:31:51.080724 1484104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:31:51.147373 1484104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:31:51.298473 1484104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:31:51.403869 1484104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:31:51.719828 1484104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:31:51.720523 1484104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:31:51.725276 1484104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:31:51.727100 1484104 out.go:204]   - Booting up control plane ...
	I1225 13:31:51.727248 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:31:51.727343 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:31:51.727431 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:31:51.745500 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:31:51.746331 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:31:51.746392 1484104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:31:51.897052 1484104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:32:00.401261 1484104 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504339 seconds
	I1225 13:32:00.401463 1484104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:32:00.422010 1484104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:32:00.962174 1484104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:32:00.962418 1484104 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-344803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:32:01.479956 1484104 kubeadm.go:322] [bootstrap-token] Using token: 7n7qlp.3wejtqrgqunjtf8y
	I1225 13:32:01.481699 1484104 out.go:204]   - Configuring RBAC rules ...
	I1225 13:32:01.481862 1484104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:32:01.489709 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:32:01.499287 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:32:01.504520 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:32:01.508950 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:32:01.517277 1484104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:32:01.537420 1484104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:32:01.820439 1484104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:32:01.897010 1484104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:32:01.897039 1484104 kubeadm.go:322] 
	I1225 13:32:01.897139 1484104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:32:01.897169 1484104 kubeadm.go:322] 
	I1225 13:32:01.897259 1484104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:32:01.897270 1484104 kubeadm.go:322] 
	I1225 13:32:01.897292 1484104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:32:01.897383 1484104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:32:01.897471 1484104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:32:01.897484 1484104 kubeadm.go:322] 
	I1225 13:32:01.897558 1484104 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:32:01.897568 1484104 kubeadm.go:322] 
	I1225 13:32:01.897621 1484104 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:32:01.897629 1484104 kubeadm.go:322] 
	I1225 13:32:01.897702 1484104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:32:01.897822 1484104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:32:01.897923 1484104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:32:01.897935 1484104 kubeadm.go:322] 
	I1225 13:32:01.898040 1484104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:32:01.898141 1484104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:32:01.898156 1484104 kubeadm.go:322] 
	I1225 13:32:01.898264 1484104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898455 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:32:01.898506 1484104 kubeadm.go:322] 	--control-plane 
	I1225 13:32:01.898516 1484104 kubeadm.go:322] 
	I1225 13:32:01.898627 1484104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:32:01.898645 1484104 kubeadm.go:322] 
	I1225 13:32:01.898760 1484104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898898 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:32:01.899552 1484104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:32:01.899699 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:32:01.899720 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:32:01.902817 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:32:01.904375 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:32:01.943752 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:32:02.004751 1484104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:32:02.004915 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=default-k8s-diff-port-344803 minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.004920 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.377800 1484104 ops.go:34] apiserver oom_adj: -16
	I1225 13:32:02.378388 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.879083 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.379453 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.878676 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.378589 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.878630 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.378615 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.879009 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.379100 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.878610 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.378604 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.878597 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.379427 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.878637 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.378638 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.879200 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.378659 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.879285 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.378603 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.878605 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.379451 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.879431 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.379034 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.878468 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.378592 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.878569 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:15.008581 1484104 kubeadm.go:1088] duration metric: took 13.00372954s to wait for elevateKubeSystemPrivileges.
	I1225 13:32:15.008626 1484104 kubeadm.go:406] StartCluster complete in 5m11.652335467s
	I1225 13:32:15.008653 1484104 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.008763 1484104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:32:15.011655 1484104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.011982 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:32:15.012172 1484104 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:32:15.012258 1484104 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012285 1484104 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012297 1484104 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:32:15.012311 1484104 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012347 1484104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-344803"
	I1225 13:32:15.012363 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012798 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012800 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012831 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012833 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012898 1484104 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012912 1484104 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012919 1484104 addons.go:246] addon metrics-server should already be in state true
	I1225 13:32:15.012961 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012972 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:32:15.013289 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.013318 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.032424 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1225 13:32:15.032981 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1225 13:32:15.033180 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1225 13:32:15.033455 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033575 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033623 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.034052 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034069 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034173 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034195 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034209 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034238 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034412 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034635 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034693 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034728 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.036190 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036205 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036228 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.036229 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.040383 1484104 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.040442 1484104 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:32:15.040473 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.040780 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.040820 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.055366 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1225 13:32:15.055979 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.056596 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.056623 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1225 13:32:15.057067 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057205 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057218 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.057413 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.057741 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.057768 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.057958 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.058013 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.058122 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058413 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058776 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.058816 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.059142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.059588 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.061854 1484104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:32:15.060849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.063569 1484104 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.063593 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:32:15.065174 1484104 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:32:15.063622 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.066654 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:32:15.066677 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:32:15.066700 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.071209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071995 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072039 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072074 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072319 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072558 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072875 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.072941 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.073085 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.073138 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.077927 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1225 13:32:15.078428 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.079241 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.079262 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.079775 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.079983 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.081656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.082002 1484104 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.082024 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:32:15.082047 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.085367 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.085779 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.085805 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.086119 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.086390 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.086656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.086875 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.262443 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:32:15.262470 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:32:15.270730 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:32:15.285178 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.302070 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:32:15.302097 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:32:15.303686 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.373021 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.373054 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:32:15.461862 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.518928 1484104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-344803" context rescaled to 1 replicas
	I1225 13:32:15.518973 1484104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:32:15.520858 1484104 out.go:177] * Verifying Kubernetes components...
	I1225 13:32:15.522326 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:32:16.993620 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.72284687s)
	I1225 13:32:16.993667 1484104 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1225 13:32:17.329206 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.025471574s)
	I1225 13:32:17.329305 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329321 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329352 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044135646s)
	I1225 13:32:17.329411 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329430 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329697 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329722 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329737 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329747 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.329764 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329740 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329805 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329825 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329838 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.331647 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331675 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331706 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331715 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.331734 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331766 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.350031 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.350068 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.350458 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.350499 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.350516 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.582723 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.120815372s)
	I1225 13:32:17.582785 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.582798 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.582787 1484104 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.060422325s)
	I1225 13:32:17.582838 1484104 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.583145 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583172 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.583179 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583192 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.583201 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.583438 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583461 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583471 1484104 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-344803"
	I1225 13:32:17.585288 1484104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:32:17.586537 1484104 addons.go:508] enable addons completed in 2.574365441s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:32:17.595130 1484104 node_ready.go:49] node "default-k8s-diff-port-344803" has status "Ready":"True"
	I1225 13:32:17.595165 1484104 node_ready.go:38] duration metric: took 12.307997ms waiting for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.595181 1484104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:32:17.613099 1484104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:19.621252 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:20.621494 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.621519 1484104 pod_ready.go:81] duration metric: took 3.008379569s waiting for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.621528 1484104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630348 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.630375 1484104 pod_ready.go:81] duration metric: took 8.841316ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630387 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636928 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.636953 1484104 pod_ready.go:81] duration metric: took 6.558203ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636963 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643335 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.643360 1484104 pod_ready.go:81] duration metric: took 6.390339ms waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643369 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649496 1484104 pod_ready.go:92] pod "kube-proxy-fpk9s" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.649526 1484104 pod_ready.go:81] duration metric: took 6.150243ms waiting for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649535 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018065 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:21.018092 1484104 pod_ready.go:81] duration metric: took 368.549291ms waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018102 1484104 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:23.026953 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:25.525822 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:27.530780 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:30.033601 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:32.528694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:34.529208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:37.028717 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:39.526632 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:42.026868 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:44.028002 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:46.526534 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:48.529899 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:51.026062 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:53.525655 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:55.526096 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:58.026355 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:00.026674 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:02.029299 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:04.526609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:06.526810 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:09.026498 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:11.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:13.029416 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:15.526242 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:18.026664 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:20.529125 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:23.026694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:25.029350 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:27.527537 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:30.030562 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:32.526381 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:34.526801 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:37.027939 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:39.526249 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:41.526511 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:43.526783 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:45.527693 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:48.026703 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:50.027582 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:52.526290 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:55.027458 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:57.526559 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:59.526699 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:01.527938 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:03.529353 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:06.025942 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:08.027340 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:10.028087 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:12.525688 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:14.527122 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:16.529380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:19.026128 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:21.026183 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:23.027208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:25.526282 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:27.531847 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:30.030025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:32.526291 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:34.526470 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:36.527179 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:39.026270 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:41.029609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:43.528905 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:46.026666 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:48.528560 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:51.025864 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:53.027211 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:55.527359 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:58.025696 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:00.027368 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:02.027605 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:04.525836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:06.526571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:08.528550 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:11.026765 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:13.028215 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:15.525903 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:17.527102 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:20.026011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:22.525873 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:24.528380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:27.026402 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:29.527869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:32.026671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:34.026737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:36.026836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:38.526788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:41.027387 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:43.526936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:46.026316 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:48.026940 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:50.526565 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:53.025988 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:55.027146 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:57.527287 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:00.028971 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:02.526704 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:05.025995 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:07.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:09.027839 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:11.526845 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:13.527737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:16.026967 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:18.028747 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:20.527437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:21.027372 1484104 pod_ready.go:81] duration metric: took 4m0.009244403s waiting for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	E1225 13:36:21.027405 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:36:21.027418 1484104 pod_ready.go:38] duration metric: took 4m3.432224558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
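
The pod_ready lines above are a readiness poll: roughly every 2-2.5 seconds the Pod's Ready condition is re-read until it turns True or the 4-minute context deadline expires (which is what happens here for metrics-server). A minimal client-go sketch of the same kind of wait, assuming the kubeconfig path that appears later in this log; the function name and the exact poll interval are illustrative assumptions, not minikube's actual implementation:

    // Sketch only: a client-go readiness wait comparable to the pod_ready poll above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition until it is True or timeout hits.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling across transient API errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-slv7p", 4*time.Minute)
        // A timeout here mirrors the "context deadline exceeded" reported above.
        fmt.Println("ready wait result:", err)
    }
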
	I1225 13:36:21.027474 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:36:21.027560 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:21.027806 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:21.090421 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:21.090464 1484104 cri.go:89] found id: ""
	I1225 13:36:21.090474 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:21.090526 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.095523 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:21.095605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:21.139092 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:21.139126 1484104 cri.go:89] found id: ""
	I1225 13:36:21.139136 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:21.139206 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.143957 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:21.144038 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:21.190905 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:21.190937 1484104 cri.go:89] found id: ""
	I1225 13:36:21.190948 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:21.191018 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.195814 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:21.195882 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:21.240274 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:21.240307 1484104 cri.go:89] found id: ""
	I1225 13:36:21.240317 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:21.240384 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.244831 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:21.244930 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:21.289367 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:21.289399 1484104 cri.go:89] found id: ""
	I1225 13:36:21.289410 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:21.289478 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.293796 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:21.293878 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:21.338757 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:21.338789 1484104 cri.go:89] found id: ""
	I1225 13:36:21.338808 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:21.338878 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.343145 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:21.343217 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:21.384898 1484104 cri.go:89] found id: ""
	I1225 13:36:21.384929 1484104 logs.go:284] 0 containers: []
	W1225 13:36:21.384936 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:21.384943 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:21.385006 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:21.436776 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:21.436809 1484104 cri.go:89] found id: ""
	I1225 13:36:21.436818 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:21.436871 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.442173 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:21.442210 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:21.886890 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:21.886944 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:21.971380 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:21.971568 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:21.992672 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:21.992724 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:22.015144 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:22.015198 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:22.195011 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:22.195060 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:22.237377 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:22.237423 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:22.284207 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:22.284240 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:22.343882 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:22.343939 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:22.404320 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:22.404356 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:22.465126 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:22.465175 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:22.521920 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:22.521963 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:22.575563 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:22.575601 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:22.627508 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627549 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:22.627808 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:22.627849 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:22.627862 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:22.627871 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627882 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
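
Each "Gathering logs" cycle above follows the same two-step pattern: resolve the component's container ID with "crictl ps -a --quiet --name=<component>", then fetch its last 400 lines with "crictl logs --tail 400 <id>". A rough Go sketch of that pattern run locally (the log does it over SSH via ssh_runner); it assumes crictl is on PATH and that sudo is available, and the helper name is illustrative:

    // Sketch only: the container-ID discovery plus tailed log fetch used per component above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func tailComponentLogs(name string) (string, error) {
        // Resolve container IDs for the component, e.g. "kube-apiserver".
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return "", err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return "", fmt.Errorf("no container was found matching %q", name)
        }
        // Fetch the last 400 log lines of the first match, as the gatherer does.
        logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
        return string(logs), err
    }

    func main() {
        logs, err := tailComponentLogs("kube-apiserver")
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Println(logs)
    }
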
	I1225 13:36:32.629903 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:36:32.648435 1484104 api_server.go:72] duration metric: took 4m17.129427556s to wait for apiserver process to appear ...
	I1225 13:36:32.648461 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:36:32.648499 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:32.648567 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:32.705637 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:32.705673 1484104 cri.go:89] found id: ""
	I1225 13:36:32.705685 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:32.705754 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.710516 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:32.710591 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:32.757193 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:32.757225 1484104 cri.go:89] found id: ""
	I1225 13:36:32.757236 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:32.757302 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.762255 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:32.762335 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:32.812666 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:32.812692 1484104 cri.go:89] found id: ""
	I1225 13:36:32.812703 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:32.812758 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.817599 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:32.817676 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:32.861969 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:32.862011 1484104 cri.go:89] found id: ""
	I1225 13:36:32.862021 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:32.862084 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.868439 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:32.868525 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:32.929969 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:32.930006 1484104 cri.go:89] found id: ""
	I1225 13:36:32.930015 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:32.930077 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.936071 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:32.936149 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:32.980256 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:32.980280 1484104 cri.go:89] found id: ""
	I1225 13:36:32.980288 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:32.980345 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.985508 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:32.985605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:33.029393 1484104 cri.go:89] found id: ""
	I1225 13:36:33.029429 1484104 logs.go:284] 0 containers: []
	W1225 13:36:33.029440 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:33.029448 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:33.029521 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:33.075129 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.075156 1484104 cri.go:89] found id: ""
	I1225 13:36:33.075167 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:33.075229 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:33.079900 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:33.079940 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.121355 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:33.121391 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:33.205175 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:33.205394 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:33.225359 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:33.225393 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:33.282658 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:33.282710 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:33.334586 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:33.334627 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:33.383538 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:33.383576 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:33.438245 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:33.438284 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:33.487260 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:33.487305 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:33.504627 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:33.504665 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:33.641875 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:33.641912 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:33.692275 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:33.692311 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:33.731932 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:33.731971 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:34.081286 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081325 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:34.081438 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:34.081456 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:34.081465 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:34.081477 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081490 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:44.083633 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:36:44.091721 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:36:44.093215 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:36:44.093242 1484104 api_server.go:131] duration metric: took 11.444775391s to wait for apiserver health ...
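
The healthz check above is a plain HTTPS GET against the apiserver endpoint (https://192.168.61.39:8444/healthz), which answers 200 with the body "ok" when healthy. A minimal sketch of such a probe; skipping TLS verification is an assumption made only to keep the example self-contained, a real client would trust the cluster CA from the kubeconfig instead:

    // Sketch only: an apiserver /healthz probe like the one logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.39:8444/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect "200 ok" from a healthy apiserver, matching the log above.
        fmt.Printf("%d %s\n", resp.StatusCode, string(body))
    }
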
	I1225 13:36:44.093251 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:36:44.093279 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:44.093330 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:44.135179 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:44.135212 1484104 cri.go:89] found id: ""
	I1225 13:36:44.135229 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:44.135308 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.140367 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:44.140455 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:44.179525 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:44.179557 1484104 cri.go:89] found id: ""
	I1225 13:36:44.179568 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:44.179644 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.184724 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:44.184822 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:44.225306 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:44.225339 1484104 cri.go:89] found id: ""
	I1225 13:36:44.225351 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:44.225418 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.230354 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:44.230459 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:44.272270 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:44.272300 1484104 cri.go:89] found id: ""
	I1225 13:36:44.272311 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:44.272387 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.277110 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:44.277187 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:44.326495 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.326519 1484104 cri.go:89] found id: ""
	I1225 13:36:44.326527 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:44.326579 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.333707 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:44.333799 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:44.380378 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:44.380410 1484104 cri.go:89] found id: ""
	I1225 13:36:44.380423 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:44.380488 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.390075 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:44.390171 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:44.440171 1484104 cri.go:89] found id: ""
	I1225 13:36:44.440211 1484104 logs.go:284] 0 containers: []
	W1225 13:36:44.440223 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:44.440233 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:44.440321 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:44.482074 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:44.482104 1484104 cri.go:89] found id: ""
	I1225 13:36:44.482114 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:44.482178 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.487171 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:44.487209 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.532144 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:44.532179 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:44.891521 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:44.891568 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:44.938934 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:44.938967 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:45.017433 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.017627 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.039058 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:45.039097 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:45.054560 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:45.054592 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:45.113698 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:45.113735 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:45.158302 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:45.158342 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:45.204784 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:45.204824 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:45.276442 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:45.276483 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:45.320645 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:45.320678 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:45.452638 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:45.452681 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:45.500718 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500757 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:45.500817 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:45.500833 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.500844 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.500853 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500859 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:55.510930 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:36:55.510962 1484104 system_pods.go:61] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.510968 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.510973 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.510977 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.510984 1484104 system_pods.go:61] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.510987 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.510995 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.510999 1484104 system_pods.go:61] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.511014 1484104 system_pods.go:74] duration metric: took 11.417757674s to wait for pod list to return data ...
	I1225 13:36:55.511025 1484104 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:36:55.514087 1484104 default_sa.go:45] found service account: "default"
	I1225 13:36:55.514112 1484104 default_sa.go:55] duration metric: took 3.081452ms for default service account to be created ...
	I1225 13:36:55.514120 1484104 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:36:55.521321 1484104 system_pods.go:86] 8 kube-system pods found
	I1225 13:36:55.521355 1484104 system_pods.go:89] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.521365 1484104 system_pods.go:89] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.521370 1484104 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.521375 1484104 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.521380 1484104 system_pods.go:89] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.521387 1484104 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.521397 1484104 system_pods.go:89] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.521409 1484104 system_pods.go:89] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.521421 1484104 system_pods.go:126] duration metric: took 7.294824ms to wait for k8s-apps to be running ...
	I1225 13:36:55.521433 1484104 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:36:55.521492 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:36:55.540217 1484104 system_svc.go:56] duration metric: took 18.766893ms WaitForService to wait for kubelet.
	I1225 13:36:55.540248 1484104 kubeadm.go:581] duration metric: took 4m40.021246946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:36:55.540271 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:36:55.544519 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:36:55.544685 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:36:55.544742 1484104 node_conditions.go:105] duration metric: took 4.463666ms to run NodePressure ...
	I1225 13:36:55.544783 1484104 start.go:228] waiting for startup goroutines ...
	I1225 13:36:55.544795 1484104 start.go:233] waiting for cluster config update ...
	I1225 13:36:55.544810 1484104 start.go:242] writing updated cluster config ...
	I1225 13:36:55.545268 1484104 ssh_runner.go:195] Run: rm -f paused
	I1225 13:36:55.607984 1484104 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:36:55.609993 1484104 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-344803" cluster and "default" namespace by default
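
The node_conditions lines shortly before "Done!" report the values this check reads straight from node capacity (ephemeral storage 17784752Ki, 2 CPUs). A short client-go sketch of reading those same fields, using the kubeconfig path that appears earlier in the log; everything else is an illustrative assumption rather than minikube's own NodePressure code:

    // Sketch only: list nodes and print the capacity fields reported above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
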
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:47 UTC, ends at Mon 2023-12-25 13:45:58 UTC. --
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.051373876Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-rbmbs,Uid:cd5fc3c3-b9db-437d-8088-2f97921bc3bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511138456758799,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:32:16.613792783Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3aaa2da62529e315024d9167e2eaa92e8a14f459ad3ed53ade57e9fcb5fdf91,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-slv7p,Uid:a51c534d-e6d8-48b9-852f-caf598c8853a
,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511137803065882,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-slv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51c534d-e6d8-48b9-852f-caf598c8853a,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:32:17.457592092Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4bee5e6e-1252-4b3d-8d6c-73515d8567e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511137685048615,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-
8d6c-73515d8567e4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-25T13:32:17.345586820Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&PodSandboxMetadata{Name:kube-proxy-fpk9s,Uid:17d80ffc-e149-44
49-aec9-9d90a2fda282,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511135088804020,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-25T13:32:14.731324767Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-344803,Uid:b89558a0ee692b5245a29c7aab9ef729,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511112836534989,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b89558a0ee692b5245a29c7aab9ef729,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b89558a0ee692b5245a29c7aab9ef729,kubernetes.io/config.seen: 2023-12-25T13:31:52.320121167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-344803,Uid:77930059fbde809ec88a6de735f03c86,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511112827868589,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.39:8444,kubernetes.io/config.hash: 77930059fbde809ec88a6de735f03c86,kubernetes.io/config.seen: 2023-12-25
T13:31:52.320126478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-344803,Uid:407e2c1ffda0cd91d0675f36c34b3336,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511112811832453,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 407e2c1ffda0cd91d0675f36c34b3336,kubernetes.io/config.seen: 2023-12-25T13:31:52.320127807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-344803,Uid:68b7e97da25bd859e
90fc4d0314838a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1703511112773615057,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e90fc4d0314838a3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.39:2379,kubernetes.io/config.hash: 68b7e97da25bd859e90fc4d0314838a3,kubernetes.io/config.seen: 2023-12-25T13:31:52.320125209Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=29bed48b-907d-4fcf-8e12-81d6b3a088b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.052721133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3c7a3025-8297-4ed1-8345-3b03c91d6a11 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.052777507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3c7a3025-8297-4ed1-8345-3b03c91d6a11 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.052960052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3c7a3025-8297-4ed1-8345-3b03c91d6a11 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.095741189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0f99de33-43f1-4b5e-aa87-3c6a091448a9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.095811526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0f99de33-43f1-4b5e-aa87-3c6a091448a9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.097939071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7eebaccc-74cb-4762-aa73-84dfb145ef29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.098415683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511958098396821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7eebaccc-74cb-4762-aa73-84dfb145ef29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.099359615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34e4345d-f10a-479a-8329-bdb99cd69cd0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.099409643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34e4345d-f10a-479a-8329-bdb99cd69cd0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.099576460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34e4345d-f10a-479a-8329-bdb99cd69cd0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.145914058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4c76071b-5e51-46b7-b475-9d2641ff388c name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.145976484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4c76071b-5e51-46b7-b475-9d2641ff388c name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.147781341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f0f4cae0-ebdf-4bb8-9385-1eb4a98d75d5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.148140581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511958148128708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f0f4cae0-ebdf-4bb8-9385-1eb4a98d75d5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.148838493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=54f0b4cb-5dd1-4bb5-9ad5-3ba7b5f40747 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.148886184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=54f0b4cb-5dd1-4bb5-9ad5-3ba7b5f40747 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.150018959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=54f0b4cb-5dd1-4bb5-9ad5-3ba7b5f40747 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.193121802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2355c03a-8833-4c41-aacc-916d81f35a17 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.193272551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2355c03a-8833-4c41-aacc-916d81f35a17 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.194419373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=039de1ed-3ea8-4ce6-bbf5-a445919e23f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.194812453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511958194798535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=039de1ed-3ea8-4ce6-bbf5-a445919e23f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.195713071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=10106d0d-d2c6-4170-89de-e461158b7563 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.195790337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=10106d0d-d2c6-4170-89de-e461158b7563 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:58 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:45:58.195967440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=10106d0d-d2c6-4170-89de-e461158b7563 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	667f9290ab9fd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   d9c7957bb4ca0       coredns-5dd5756b68-rbmbs
	2752dc28afbf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   503e06ebad5c6       storage-provisioner
	09edd8162e2b7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   9c7da8fea5ac3       kube-proxy-fpk9s
	94e27fadf048b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   e8f110c9e64ae       etcd-default-k8s-diff-port-344803
	935f1c4836b96       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   25e3b9339d0ba       kube-scheduler-default-k8s-diff-port-344803
	3670e177c122b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   b1248b21fb07a       kube-controller-manager-default-k8s-diff-port-344803
	3e5f34c8c4093       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   26dff8002b289       kube-apiserver-default-k8s-diff-port-344803
	
	
	==> coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37798 - 47913 "HINFO IN 8929664785579530971.855764544156376687. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009378709s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-344803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-344803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=default-k8s-diff-port-344803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-344803
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:45:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:42:34 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:42:34 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:42:34 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:42:34 +0000   Mon, 25 Dec 2023 13:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.39
	  Hostname:    default-k8s-diff-port-344803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9137c9b00b9640de913c0f6607cb361e
	  System UUID:                9137c9b0-0b96-40de-913c-0f6607cb361e
	  Boot ID:                    d79c15c2-2217-406f-8530-049b2957669c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rbmbs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-344803                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-344803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-344803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fpk9s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-344803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-slv7p                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-344803 event: Registered Node default-k8s-diff-port-344803 in Controller
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071193] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.651379] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155750] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.510662] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.115514] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.187838] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.161562] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.179009] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.311726] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Dec25 13:27] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[ +14.521579] kauditd_printk_skb: 19 callbacks suppressed
	[Dec25 13:31] systemd-fstab-generator[3520]: Ignoring "noauto" for root device
	[Dec25 13:32] systemd-fstab-generator[3844]: Ignoring "noauto" for root device
	[ +16.115239] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] <==
	{"level":"info","ts":"2023-12-25T13:31:55.923688Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.39:2380"}
	{"level":"info","ts":"2023-12-25T13:31:55.923821Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.39:2380"}
	{"level":"info","ts":"2023-12-25T13:31:55.927025Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-25T13:31:55.926956Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"17fc404ea26715a5","initial-advertise-peer-urls":["https://192.168.61.39:2380"],"listen-peer-urls":["https://192.168.61.39:2380"],"advertise-client-urls":["https://192.168.61.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-25T13:31:56.749408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-25T13:31:56.749522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-25T13:31:56.749551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 received MsgPreVoteResp from 17fc404ea26715a5 at term 1"}
	{"level":"info","ts":"2023-12-25T13:31:56.749566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 became candidate at term 2"}
	{"level":"info","ts":"2023-12-25T13:31:56.749574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 received MsgVoteResp from 17fc404ea26715a5 at term 2"}
	{"level":"info","ts":"2023-12-25T13:31:56.74959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"17fc404ea26715a5 became leader at term 2"}
	{"level":"info","ts":"2023-12-25T13:31:56.7496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 17fc404ea26715a5 elected leader 17fc404ea26715a5 at term 2"}
	{"level":"info","ts":"2023-12-25T13:31:56.750902Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"17fc404ea26715a5","local-member-attributes":"{Name:default-k8s-diff-port-344803 ClientURLs:[https://192.168.61.39:2379]}","request-path":"/0/members/17fc404ea26715a5/attributes","cluster-id":"9f6bc8fdeeeeca08","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-25T13:31:56.750959Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T13:31:56.751316Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T13:31:56.752051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.39:2379"}
	{"level":"info","ts":"2023-12-25T13:31:56.752134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-25T13:31:56.752489Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-25T13:31:56.752561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-25T13:31:56.752619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f6bc8fdeeeeca08","local-member-id":"17fc404ea26715a5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T13:31:56.752712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T13:31:56.752749Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T13:31:56.753036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-25T13:41:56.790256Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2023-12-25T13:41:56.793644Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.537002ms","hash":3125984727}
	{"level":"info","ts":"2023-12-25T13:41:56.793769Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3125984727,"revision":677,"compact-revision":-1}
	
	
	==> kernel <==
	 13:45:58 up 19 min,  0 users,  load average: 0.02, 0.15, 0.21
	Linux default-k8s-diff-port-344803 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] <==
	W1225 13:41:59.384731       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:41:59.384787       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:41:59.384800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:41:59.384852       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:41:59.384935       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:41:59.386217       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:42:58.263365       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:42:59.385243       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:42:59.385341       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:42:59.385402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:42:59.386474       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:42:59.386580       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:42:59.386608       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:43:58.263955       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1225 13:44:58.263720       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:44:59.386360       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:44:59.386446       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:44:59.386457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:44:59.387720       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:44:59.387847       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:44:59.387861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:45:58.263859       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] <==
	I1225 13:40:15.308017       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:40:44.805846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:40:45.319785       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:41:14.811877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:41:15.330902       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:41:44.819364       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:41:45.344355       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:42:14.826282       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:42:15.355405       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:42:44.833578       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:42:45.364793       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:43:14.839640       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:43:15.375435       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:43:21.006370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.506µs"
	I1225 13:43:33.009868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="204.193µs"
	E1225 13:43:44.848018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:43:45.385370       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:14.856017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:15.394761       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:44.862617       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:45.404414       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:14.869017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:15.415728       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:44.879682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:45.425881       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] <==
	I1225 13:32:17.091391       1 server_others.go:69] "Using iptables proxy"
	I1225 13:32:17.160730       1 node.go:141] Successfully retrieved node IP: 192.168.61.39
	I1225 13:32:17.336101       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 13:32:17.336148       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:32:17.356414       1 server_others.go:152] "Using iptables Proxier"
	I1225 13:32:17.356687       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:32:17.357977       1 server.go:846] "Version info" version="v1.28.4"
	I1225 13:32:17.358068       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:32:17.368672       1 config.go:188] "Starting service config controller"
	I1225 13:32:17.370037       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:32:17.370229       1 config.go:315] "Starting node config controller"
	I1225 13:32:17.370263       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:32:17.370911       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:32:17.370943       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:32:17.522012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 13:32:17.522123       1 shared_informer.go:318] Caches are synced for node config
	I1225 13:32:17.522134       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] <==
	W1225 13:31:58.453466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:31:58.453523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 13:31:58.453596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 13:31:58.453891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1225 13:31:58.453664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:31:58.453945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 13:31:58.453714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1225 13:31:58.453996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1225 13:31:58.453760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:31:58.455960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 13:31:59.346158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:31:59.346262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 13:31:59.353969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 13:31:59.354076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1225 13:31:59.376741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:31:59.376838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 13:31:59.399976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:31:59.400093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 13:31:59.432044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1225 13:31:59.432223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1225 13:31:59.665977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1225 13:31:59.666065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1225 13:31:59.750926       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1225 13:31:59.751032       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 13:32:02.122491       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:47 UTC, ends at Mon 2023-12-25 13:45:58 UTC. --
	Dec 25 13:43:09 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:09.000138    3851 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:43:09 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:09.000252    3851 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:43:09 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:09.000466    3851 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nfw56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-slv7p_kube-system(a51c534d-e6d8-48b9-852f-caf598c8853a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:43:09 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:09.000503    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:43:20 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:20.985897    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:43:32 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:32.986537    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:43:46 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:43:46.986693    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:44:01 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:01.987333    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:44:02 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:02.078733    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:44:02 default-k8s-diff-port-344803 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:44:02 default-k8s-diff-port-344803 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:44:02 default-k8s-diff-port-344803 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:44:12 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:12.986330    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:44:23 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:23.988899    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:44:38 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:38.986366    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:44:49 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:44:49.986408    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:45:01 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:01.987111    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:45:02 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:02.080448    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:45:02 default-k8s-diff-port-344803 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:45:02 default-k8s-diff-port-344803 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:45:02 default-k8s-diff-port-344803 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:45:12 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:12.985681    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:45:25 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:25.986757    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:45:40 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:40.991588    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:45:51 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:45:51.992699    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	
	
	==> storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] <==
	I1225 13:32:18.522452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:32:18.542368       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:32:18.543402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:32:18.596645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:32:18.596884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a!
	I1225 13:32:18.621502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2abd788-7c74-4c41-8745-bad346f1dad2", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a became leader
	I1225 13:32:18.698080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-slv7p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p: exit status 1 (78.159587ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-slv7p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.10s)
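The only pod left non-Running in the post-mortem above is the metrics-server replica, which the kubelet log shows stuck in ImagePullBackOff against the fake.domain registry the test deliberately configures (CustomAddonRegistries:map[MetricsServer:fake.domain] in the profile config). For manual triage, the field-selector query the harness runs can be extended to also print each container's waiting reason; this is a sketch of such a check, not part of the recorded test run:

	kubectl --context default-k8s-diff-port-344803 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'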

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (531.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:38:56.706710 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:39:07.347559 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-198979 -n old-k8s-version-198979
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:45:53.667517334 +0000 UTC m=+5378.285994356
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-198979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-198979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.27µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-198979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
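The assertion above compares the image recorded in the dashboard-metrics-scraper deployment against registry.k8s.io/echoserver:1.4; when the describe call times out as it does here, the same value can be read back manually with a single-field query along these lines (a hedged sketch using only names that appear in this report, not part of the recorded run):

	# print the container image(s) declared in the deployment spec
	kubectl --context old-k8s-version-198979 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'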
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-198979 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-198979 logs -n 25: (1.693352038s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-435411                           | kubernetes-upgrade-435411    | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:17 UTC |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:17 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:19 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-198979        | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC | 25 Dec 23 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:25:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:25:09.868120 1484104 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:25:09.868323 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868335 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:25:09.868341 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:25:09.868532 1484104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:25:09.869122 1484104 out.go:303] Setting JSON to false
	I1225 13:25:09.870130 1484104 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158863,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:25:09.870205 1484104 start.go:138] virtualization: kvm guest
	I1225 13:25:09.872541 1484104 out.go:177] * [default-k8s-diff-port-344803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:25:09.874217 1484104 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:25:09.874305 1484104 notify.go:220] Checking for updates...
	I1225 13:25:09.875839 1484104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:25:09.877587 1484104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:25:09.879065 1484104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:25:09.880503 1484104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:25:09.881819 1484104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:25:09.883607 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:25:09.884026 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.884110 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.899270 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1225 13:25:09.899708 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.900286 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.900337 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.900687 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.900912 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.901190 1484104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:25:09.901525 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:09.901579 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:09.916694 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1225 13:25:09.917130 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:09.917673 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:25:09.917704 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:09.918085 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:09.918333 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:25:09.953536 1484104 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:25:09.955050 1484104 start.go:298] selected driver: kvm2
	I1225 13:25:09.955065 1484104 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.955241 1484104 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:25:09.955956 1484104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.956047 1484104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:25:09.971769 1484104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:25:09.972199 1484104 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:25:09.972296 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:25:09.972313 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:25:09.972334 1484104 start_flags.go:323] config:
	{Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-34480
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:25:09.972534 1484104 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:25:09.975411 1484104 out.go:177] * Starting control plane node default-k8s-diff-port-344803 in cluster default-k8s-diff-port-344803
	I1225 13:25:07.694690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:09.976744 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:25:09.976814 1484104 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:25:09.976830 1484104 cache.go:56] Caching tarball of preloaded images
	I1225 13:25:09.976928 1484104 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:25:09.976941 1484104 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:25:09.977353 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:25:09.977710 1484104 start.go:365] acquiring machines lock for default-k8s-diff-port-344803: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:10.766734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:16.850681 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:19.922690 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:25.998796 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:29.070780 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:35.150661 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:38.222822 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:44.302734 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.379073 1483118 start.go:369] acquired machines lock for "no-preload-330063" in 3m45.211894916s
	I1225 13:25:50.379143 1483118 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:25:50.379155 1483118 fix.go:54] fixHost starting: 
	I1225 13:25:50.379692 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:25:50.379739 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:25:50.395491 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1225 13:25:50.395953 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:25:50.396490 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:25:50.396512 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:25:50.396880 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:25:50.397080 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:25:50.397224 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:25:50.399083 1483118 fix.go:102] recreateIfNeeded on no-preload-330063: state=Stopped err=<nil>
	I1225 13:25:50.399110 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	W1225 13:25:50.399283 1483118 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:25:50.401483 1483118 out.go:177] * Restarting existing kvm2 VM for "no-preload-330063" ...
	I1225 13:25:47.374782 1482618 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.186:22: connect: no route to host
	I1225 13:25:50.376505 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:25:50.376562 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:25:50.378895 1482618 machine.go:91] provisioned docker machine in 4m37.578359235s
	I1225 13:25:50.378958 1482618 fix.go:56] fixHost completed within 4m37.60680956s
	I1225 13:25:50.378968 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 4m37.606859437s
	W1225 13:25:50.378992 1482618 start.go:694] error starting host: provision: host is not running
	W1225 13:25:50.379100 1482618 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1225 13:25:50.379111 1482618 start.go:709] Will try again in 5 seconds ...
	I1225 13:25:50.403280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Start
	I1225 13:25:50.403507 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring networks are active...
	I1225 13:25:50.404422 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network default is active
	I1225 13:25:50.404784 1483118 main.go:141] libmachine: (no-preload-330063) Ensuring network mk-no-preload-330063 is active
	I1225 13:25:50.405087 1483118 main.go:141] libmachine: (no-preload-330063) Getting domain xml...
	I1225 13:25:50.405654 1483118 main.go:141] libmachine: (no-preload-330063) Creating domain...
	I1225 13:25:51.676192 1483118 main.go:141] libmachine: (no-preload-330063) Waiting to get IP...
	I1225 13:25:51.677110 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.677638 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.677715 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.677616 1484268 retry.go:31] will retry after 268.018359ms: waiting for machine to come up
	I1225 13:25:51.947683 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:51.948172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:51.948198 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:51.948118 1484268 retry.go:31] will retry after 278.681465ms: waiting for machine to come up
	I1225 13:25:52.228745 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.229234 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.229265 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.229166 1484268 retry.go:31] will retry after 329.72609ms: waiting for machine to come up
	I1225 13:25:52.560878 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.561315 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.561348 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.561257 1484268 retry.go:31] will retry after 398.659264ms: waiting for machine to come up
	I1225 13:25:52.962067 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:52.962596 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:52.962620 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:52.962548 1484268 retry.go:31] will retry after 474.736894ms: waiting for machine to come up
	I1225 13:25:53.439369 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:53.439834 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:53.439856 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:53.439795 1484268 retry.go:31] will retry after 632.915199ms: waiting for machine to come up
	I1225 13:25:54.074832 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.075320 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.075349 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.075286 1484268 retry.go:31] will retry after 889.605242ms: waiting for machine to come up
	I1225 13:25:54.966323 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:54.966800 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:54.966822 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:54.966757 1484268 retry.go:31] will retry after 1.322678644s: waiting for machine to come up
	I1225 13:25:55.379741 1482618 start.go:365] acquiring machines lock for old-k8s-version-198979: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:25:56.291182 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:56.291604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:56.291633 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:56.291567 1484268 retry.go:31] will retry after 1.717647471s: waiting for machine to come up
	I1225 13:25:58.011626 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:25:58.012081 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:25:58.012116 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:25:58.012018 1484268 retry.go:31] will retry after 2.29935858s: waiting for machine to come up
	I1225 13:26:00.314446 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:00.314833 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:00.314858 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:00.314806 1484268 retry.go:31] will retry after 2.50206405s: waiting for machine to come up
	I1225 13:26:02.819965 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:02.820458 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:02.820490 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:02.820403 1484268 retry.go:31] will retry after 2.332185519s: waiting for machine to come up
	I1225 13:26:05.155725 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:05.156228 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:05.156263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:05.156153 1484268 retry.go:31] will retry after 2.769754662s: waiting for machine to come up
	I1225 13:26:07.929629 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:07.930087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | unable to find current IP address of domain no-preload-330063 in network mk-no-preload-330063
	I1225 13:26:07.930126 1483118 main.go:141] libmachine: (no-preload-330063) DBG | I1225 13:26:07.930040 1484268 retry.go:31] will retry after 4.407133766s: waiting for machine to come up
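The "will retry after ..." lines above come from a wait loop that polls libvirt for the restarted domain's DHCP lease and sleeps a growing interval between attempts. A minimal Go sketch of that wait-with-backoff pattern, assuming a placeholder lookupIP callback and an illustrative growth factor rather than minikube's actual retry.go helper:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline
// passes, sleeping a little longer after every miss, mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the interval between attempts
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("machine has no lease yet")
		}
		return "192.168.72.232", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
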
	I1225 13:26:13.687348 1483946 start.go:369] acquired machines lock for "embed-certs-880612" in 1m27.002513209s
	I1225 13:26:13.687426 1483946 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:13.687436 1483946 fix.go:54] fixHost starting: 
	I1225 13:26:13.687850 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:13.687916 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:13.706054 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1225 13:26:13.706521 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:13.707063 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:26:13.707087 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:13.707472 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:13.707645 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:13.707832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:26:13.709643 1483946 fix.go:102] recreateIfNeeded on embed-certs-880612: state=Stopped err=<nil>
	I1225 13:26:13.709676 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	W1225 13:26:13.709868 1483946 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:13.712452 1483946 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880612" ...
	I1225 13:26:12.339674 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340219 1483118 main.go:141] libmachine: (no-preload-330063) Found IP for machine: 192.168.72.232
	I1225 13:26:12.340243 1483118 main.go:141] libmachine: (no-preload-330063) Reserving static IP address...
	I1225 13:26:12.340263 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has current primary IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.340846 1483118 main.go:141] libmachine: (no-preload-330063) Reserved static IP address: 192.168.72.232
	I1225 13:26:12.340896 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.340912 1483118 main.go:141] libmachine: (no-preload-330063) Waiting for SSH to be available...
	I1225 13:26:12.340947 1483118 main.go:141] libmachine: (no-preload-330063) DBG | skip adding static IP to network mk-no-preload-330063 - found existing host DHCP lease matching {name: "no-preload-330063", mac: "52:54:00:e9:c3:b6", ip: "192.168.72.232"}
	I1225 13:26:12.340962 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Getting to WaitForSSH function...
	I1225 13:26:12.343164 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343417 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.343448 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.343552 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH client type: external
	I1225 13:26:12.343566 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa (-rw-------)
	I1225 13:26:12.343587 1483118 main.go:141] libmachine: (no-preload-330063) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:12.343595 1483118 main.go:141] libmachine: (no-preload-330063) DBG | About to run SSH command:
	I1225 13:26:12.343603 1483118 main.go:141] libmachine: (no-preload-330063) DBG | exit 0
	I1225 13:26:12.434661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:12.435101 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetConfigRaw
	I1225 13:26:12.435827 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.438300 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438673 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.438705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.438870 1483118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/config.json ...
	I1225 13:26:12.439074 1483118 machine.go:88] provisioning docker machine ...
	I1225 13:26:12.439093 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:12.439335 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439556 1483118 buildroot.go:166] provisioning hostname "no-preload-330063"
	I1225 13:26:12.439584 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.439789 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.442273 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442630 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.442661 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.442768 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.442956 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443127 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.443271 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.443410 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.443772 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.443787 1483118 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-330063 && echo "no-preload-330063" | sudo tee /etc/hostname
	I1225 13:26:12.581579 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-330063
	
	I1225 13:26:12.581609 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.584621 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.584949 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.584979 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.585252 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.585495 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585656 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.585790 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.585947 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:12.586320 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:12.586346 1483118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-330063' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-330063/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-330063' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:12.717139 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:12.717176 1483118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:12.717197 1483118 buildroot.go:174] setting up certificates
	I1225 13:26:12.717212 1483118 provision.go:83] configureAuth start
	I1225 13:26:12.717229 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetMachineName
	I1225 13:26:12.717570 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:12.720469 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.720828 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.720859 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.721016 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.723432 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723758 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.723815 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.723944 1483118 provision.go:138] copyHostCerts
	I1225 13:26:12.724021 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:12.724035 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:12.724102 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:12.724207 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:12.724215 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:12.724242 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:12.724323 1483118 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:12.724330 1483118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:12.724351 1483118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:12.724408 1483118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.no-preload-330063 san=[192.168.72.232 192.168.72.232 localhost 127.0.0.1 minikube no-preload-330063]
	I1225 13:26:12.929593 1483118 provision.go:172] copyRemoteCerts
	I1225 13:26:12.929665 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:12.929699 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:12.932608 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.932934 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:12.932978 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:12.933144 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:12.933389 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:12.933581 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:12.933738 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.023574 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:13.047157 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:26:13.070779 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:13.094249 1483118 provision.go:86] duration metric: configureAuth took 377.018818ms
	I1225 13:26:13.094284 1483118 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:13.094538 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:13.094665 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.097705 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098133 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.098179 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.098429 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.098708 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.098888 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.099029 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.099191 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.099516 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.099534 1483118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:13.430084 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:13.430138 1483118 machine.go:91] provisioned docker machine in 991.050011ms
	I1225 13:26:13.430150 1483118 start.go:300] post-start starting for "no-preload-330063" (driver="kvm2")
	I1225 13:26:13.430162 1483118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:13.430185 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.430616 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:13.430661 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.433623 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434018 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.434054 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.434191 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.434413 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.434586 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.434700 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.523954 1483118 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:13.528009 1483118 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:13.528040 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:13.528118 1483118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:13.528214 1483118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:13.528359 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:13.536826 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:13.561011 1483118 start.go:303] post-start completed in 130.840608ms
	I1225 13:26:13.561046 1483118 fix.go:56] fixHost completed within 23.181891118s
	I1225 13:26:13.561078 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.563717 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564040 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.564087 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.564268 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.564504 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564702 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.564812 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.564965 1483118 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:13.565326 1483118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.72.232 22 <nil> <nil>}
	I1225 13:26:13.565340 1483118 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:13.687155 1483118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510773.671808211
	
	I1225 13:26:13.687181 1483118 fix.go:206] guest clock: 1703510773.671808211
	I1225 13:26:13.687189 1483118 fix.go:219] Guest: 2023-12-25 13:26:13.671808211 +0000 UTC Remote: 2023-12-25 13:26:13.561052282 +0000 UTC m=+248.574935292 (delta=110.755929ms)
	I1225 13:26:13.687209 1483118 fix.go:190] guest clock delta is within tolerance: 110.755929ms
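The fix.go lines above read the guest clock over SSH ("date +%s.%N"), compare it with the host clock, and accept the existing machine when the delta is within tolerance. A small Go sketch of that comparison using the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and
// whether it falls inside the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2023, 12, 25, 13, 26, 13, 561052282, time.UTC) // "Remote" timestamp from the log
	guest := time.Unix(1703510773, 671808211).UTC()                  // guest clock read via date +%s.%N
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=110.755929ms within tolerance: true
}
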
	I1225 13:26:13.687214 1483118 start.go:83] releasing machines lock for "no-preload-330063", held for 23.308100249s
	I1225 13:26:13.687243 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.687561 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:13.690172 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690572 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.690604 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.690780 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691362 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691534 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:13.691615 1483118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:13.691670 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.691807 1483118 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:13.691842 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:13.694593 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694871 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.694943 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.694967 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695202 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695293 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:13.695319 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:13.695452 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695508 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:13.695613 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.695725 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:13.695813 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.695899 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:13.696068 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:13.812135 1483118 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:13.817944 1483118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:13.965641 1483118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:13.973263 1483118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:13.973433 1483118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:13.991077 1483118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:13.991112 1483118 start.go:475] detecting cgroup driver to use...
	I1225 13:26:13.991197 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:14.005649 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:14.018464 1483118 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:14.018540 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:14.031361 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:14.046011 1483118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:14.152826 1483118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:14.281488 1483118 docker.go:219] disabling docker service ...
	I1225 13:26:14.281577 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:14.297584 1483118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:14.311896 1483118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:14.448141 1483118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:14.583111 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:14.599419 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:14.619831 1483118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:14.619909 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.631979 1483118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:14.632065 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.643119 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.655441 1483118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:14.666525 1483118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:14.678080 1483118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:14.687889 1483118 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:14.687957 1483118 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:14.702290 1483118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:14.712225 1483118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:14.836207 1483118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:15.019332 1483118 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:15.019424 1483118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:15.024755 1483118 start.go:543] Will wait 60s for crictl version
	I1225 13:26:15.024844 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.028652 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:15.074415 1483118 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:15.074550 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.128559 1483118 ssh_runner.go:195] Run: crio --version
	I1225 13:26:15.178477 1483118 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
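Before the Kubernetes preparation step above, CRI-O on the guest is reconfigured: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, conmon_cgroup is set to "pod", and the daemon is restarted. A Go sketch that replays those exact sed/systemctl commands from the log; the configureCRIO helper and the local "sh -c" runner are illustrative stand-ins for minikube's ssh_runner, which executes them on the VM:

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO runs the cri-o configuration commands seen in the log
// above, in order, aborting on the first failure.
func configureCRIO(run func(cmd string) error) error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

func main() {
	// For illustration only: run the commands through a local shell.
	err := configureCRIO(func(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	})
	fmt.Println(err)
}
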
	I1225 13:26:13.714488 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Start
	I1225 13:26:13.714708 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring networks are active...
	I1225 13:26:13.715513 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network default is active
	I1225 13:26:13.715868 1483946 main.go:141] libmachine: (embed-certs-880612) Ensuring network mk-embed-certs-880612 is active
	I1225 13:26:13.716279 1483946 main.go:141] libmachine: (embed-certs-880612) Getting domain xml...
	I1225 13:26:13.716905 1483946 main.go:141] libmachine: (embed-certs-880612) Creating domain...
	I1225 13:26:15.049817 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting to get IP...
	I1225 13:26:15.051040 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.051641 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.051756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.051615 1484395 retry.go:31] will retry after 199.911042ms: waiting for machine to come up
	I1225 13:26:15.253158 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.260582 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.260620 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.260519 1484395 retry.go:31] will retry after 285.022636ms: waiting for machine to come up
	I1225 13:26:15.547290 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.547756 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.547787 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.547692 1484395 retry.go:31] will retry after 327.637369ms: waiting for machine to come up
	I1225 13:26:15.877618 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:15.878119 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:15.878153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:15.878058 1484395 retry.go:31] will retry after 384.668489ms: waiting for machine to come up
	I1225 13:26:16.264592 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.265056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.265084 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.265005 1484395 retry.go:31] will retry after 468.984683ms: waiting for machine to come up
	I1225 13:26:15.180205 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetIP
	I1225 13:26:15.183372 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.183820 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:15.183862 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:15.184054 1483118 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:15.188935 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:15.202790 1483118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:26:15.202839 1483118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:15.245267 1483118 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:26:15.245297 1483118 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:26:15.245409 1483118 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.245430 1483118 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.245448 1483118 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.245467 1483118 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.245468 1483118 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1225 13:26:15.245534 1483118 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.245447 1483118 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.245404 1483118 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.247839 1483118 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.247850 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.247874 1483118 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.247911 1483118 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.247980 1483118 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1225 13:26:15.247984 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.248043 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.248281 1483118 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.404332 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.405729 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.407712 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1225 13:26:15.412419 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.413201 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.413349 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.453117 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.533541 1483118 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.536843 1483118 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1225 13:26:15.536896 1483118 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.536950 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.576965 1483118 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1225 13:26:15.577010 1483118 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.577078 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688643 1483118 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1225 13:26:15.688696 1483118 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.688710 1483118 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1225 13:26:15.688750 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688759 1483118 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.688765 1483118 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1225 13:26:15.688794 1483118 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.688813 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688835 1483118 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1225 13:26:15.688847 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688858 1483118 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.688869 1483118 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1225 13:26:15.688890 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688896 1483118 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.688910 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1225 13:26:15.688921 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:26:15.688949 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1225 13:26:15.706288 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1225 13:26:15.779043 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1225 13:26:15.779170 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1225 13:26:15.779219 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1225 13:26:15.779181 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.779297 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:15.779309 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:15.779274 1483118 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1225 13:26:15.779439 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.779507 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:15.864891 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.865017 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:15.884972 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885024 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885035 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1225 13:26:15.885045 1483118 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885091 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1225 13:26:15.885094 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:15.885109 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:15.885146 1483118 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1225 13:26:15.885167 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1225 13:26:15.885229 1483118 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:15.885239 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1225 13:26:15.885273 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1225 13:26:15.898753 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1225 13:26:17.966777 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.08165399s)
	I1225 13:26:17.966822 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1225 13:26:17.966836 1483118 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.081714527s)
	I1225 13:26:17.966865 1483118 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.081735795s)
	I1225 13:26:17.966848 1483118 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:17.966894 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966874 1483118 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1225 13:26:17.966936 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1225 13:26:16.736013 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:16.736519 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:16.736553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:16.736449 1484395 retry.go:31] will retry after 873.004128ms: waiting for machine to come up
	I1225 13:26:17.611675 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:17.612135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:17.612160 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:17.612085 1484395 retry.go:31] will retry after 1.093577821s: waiting for machine to come up
	I1225 13:26:18.707411 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:18.707936 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:18.707994 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:18.707904 1484395 retry.go:31] will retry after 1.364130049s: waiting for machine to come up
	I1225 13:26:20.074559 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:20.075102 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:20.075135 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:20.075033 1484395 retry.go:31] will retry after 1.740290763s: waiting for machine to come up
	I1225 13:26:21.677915 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.710943608s)
	I1225 13:26:21.677958 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1225 13:26:21.677990 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:21.678050 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1225 13:26:23.630977 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.952875837s)
	I1225 13:26:23.631018 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1225 13:26:23.631051 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:23.631112 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1225 13:26:21.818166 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:21.818695 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:21.818728 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:21.818641 1484395 retry.go:31] will retry after 2.035498479s: waiting for machine to come up
	I1225 13:26:23.856368 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:23.857094 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:23.857120 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:23.856997 1484395 retry.go:31] will retry after 2.331127519s: waiting for machine to come up
	I1225 13:26:26.191862 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:26.192571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:26.192608 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:26.192513 1484395 retry.go:31] will retry after 3.191632717s: waiting for machine to come up
	I1225 13:26:26.193816 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.56267278s)
	I1225 13:26:26.193849 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1225 13:26:26.193884 1483118 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:26.193951 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 13:26:27.342879 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.148892619s)
	I1225 13:26:27.342913 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 13:26:27.342948 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:27.343014 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1225 13:26:29.909035 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.565991605s)
	I1225 13:26:29.909080 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1225 13:26:29.909105 1483118 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.909159 1483118 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1225 13:26:29.386007 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:29.386335 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | unable to find current IP address of domain embed-certs-880612 in network mk-embed-certs-880612
	I1225 13:26:29.386366 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | I1225 13:26:29.386294 1484395 retry.go:31] will retry after 3.786228584s: waiting for machine to come up
	I1225 13:26:34.439583 1484104 start.go:369] acquired machines lock for "default-k8s-diff-port-344803" in 1m24.461830001s
	I1225 13:26:34.439666 1484104 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:34.439686 1484104 fix.go:54] fixHost starting: 
	I1225 13:26:34.440164 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:34.440230 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:34.457403 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I1225 13:26:34.457867 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:34.458351 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:26:34.458422 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:34.458748 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:34.458989 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:34.459176 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:26:34.460975 1484104 fix.go:102] recreateIfNeeded on default-k8s-diff-port-344803: state=Stopped err=<nil>
	I1225 13:26:34.461008 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	W1225 13:26:34.461188 1484104 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:34.463715 1484104 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-344803" ...
	I1225 13:26:34.465022 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Start
	I1225 13:26:34.465274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring networks are active...
	I1225 13:26:34.466182 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network default is active
	I1225 13:26:34.466565 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Ensuring network mk-default-k8s-diff-port-344803 is active
	I1225 13:26:34.466922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Getting domain xml...
	I1225 13:26:34.467691 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Creating domain...
	I1225 13:26:32.065345 1483118 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.15614946s)
	I1225 13:26:32.065380 1483118 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1225 13:26:32.065414 1483118 cache_images.go:123] Successfully loaded all cached images
	I1225 13:26:32.065421 1483118 cache_images.go:92] LoadImages completed in 16.820112197s
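The lines above show how minikube populates the no-preload profile's image store: each cached tarball already sits under /var/lib/minikube/images on the node (the "copy: skipping ... (exists)" entries), any stale copy is removed with crictl, and the tarball is imported with podman load. A minimal manual sketch of the same sequence for one image, using only the paths and commands that appear in the log (this is not minikube's actual implementation, which lives in cache_images.go):

    # Run on the node; mirrors the crictl/podman calls logged above.
    sudo crictl rmi registry.k8s.io/coredns/coredns:v1.11.1 || true   # drop any stale copy, ignore "not found"
    stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1          # confirm the cached tarball is present
    sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1      # import it into the containers/storage used by CRI-O
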
	I1225 13:26:32.065498 1483118 ssh_runner.go:195] Run: crio config
	I1225 13:26:32.120989 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:32.121019 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:32.121045 1483118 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:32.121063 1483118 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-330063 NodeName:no-preload-330063 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:32.121216 1483118 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-330063"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:32.121297 1483118 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-330063 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:26:32.121357 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:26:32.132569 1483118 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:32.132677 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:32.142052 1483118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1225 13:26:32.158590 1483118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:26:32.174761 1483118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
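At this point the rendered kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and kubeadm.yaml.new have been copied onto the node. Systemd only picks up a new or changed unit after a daemon-reload, so the usual manual follow-up looks like the sketch below (standard systemd commands shown for orientation; the exact restart sequence minikube uses is not part of this excerpt):

    sudo systemctl daemon-reload               # make systemd re-read the new kubelet unit and drop-in
    sudo systemctl restart kubelet
    sudo systemctl status kubelet --no-pager   # check that the new ExecStart from 10-kubeadm.conf is in effect
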
	I1225 13:26:32.191518 1483118 ssh_runner.go:195] Run: grep 192.168.72.232	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:32.195353 1483118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:32.206845 1483118 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063 for IP: 192.168.72.232
	I1225 13:26:32.206879 1483118 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:32.207098 1483118 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:32.207145 1483118 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:32.207212 1483118 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.key
	I1225 13:26:32.207270 1483118 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key.4e9d87c6
	I1225 13:26:32.207323 1483118 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key
	I1225 13:26:32.207437 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:32.207465 1483118 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:32.207475 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:32.207513 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:32.207539 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:32.207565 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:32.207607 1483118 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:32.208427 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:32.231142 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:32.253335 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:32.275165 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:32.297762 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:32.320671 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:32.344125 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:32.368066 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:32.390688 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:32.412849 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:32.435445 1483118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:32.457687 1483118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:32.474494 1483118 ssh_runner.go:195] Run: openssl version
	I1225 13:26:32.480146 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:32.491141 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495831 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.495902 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:32.501393 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:32.511643 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:32.521843 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526421 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.526514 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:32.531988 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:32.542920 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:26:32.553604 1483118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558381 1483118 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.558478 1483118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:32.563913 1483118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
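The openssl/ln pairs above are how the host CA certificates become trusted inside the guest: each PEM is placed under /usr/share/ca-certificates, its subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink is created in /etc/ssl/certs (that is where the b5213941.0 and 3ec20f2e.0 names come from). A manual sketch of the same trick, assuming the PEM is already in place:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")      # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"     # OpenSSL resolves trust via <hash>.0 lookups
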
	I1225 13:26:32.574591 1483118 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:32.579046 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:32.584821 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:32.590781 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:32.596456 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:32.601978 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:32.607981 1483118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
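Each of the checks above uses openssl x509 -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours); minikube runs these checks before reusing the existing control-plane certificates. A small sketch that applies the same test to the files named in the log:

    # Flag any control-plane cert that expires within 24h (same check as the log lines above).
    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null || echo "expiring soon: $crt"
    done
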
	I1225 13:26:32.613785 1483118 kubeadm.go:404] StartCluster: {Name:no-preload-330063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-330063 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:32.613897 1483118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:32.613955 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:32.651782 1483118 cri.go:89] found id: ""
	I1225 13:26:32.651858 1483118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:32.664617 1483118 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:32.664648 1483118 kubeadm.go:636] restartCluster start
	I1225 13:26:32.664710 1483118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:32.674727 1483118 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:32.676090 1483118 kubeconfig.go:92] found "no-preload-330063" server: "https://192.168.72.232:8443"
	I1225 13:26:32.679085 1483118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:32.689716 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:32.689824 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:32.702305 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.189843 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.189955 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.202514 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:33.689935 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:33.690048 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:33.703975 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.190601 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.190722 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.203987 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:34.690505 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:34.690639 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:34.701704 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
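The repeated "Checking apiserver status" / "stopped: unable to get apiserver pid" entries are a poll loop: pgrep is run against the kube-apiserver command line roughly every half second, and a non-zero exit means the apiserver process is not up yet. A minimal sketch of the same wait, reusing the pgrep pattern from the log:

    # Poll until a kube-apiserver process appears (same pattern as the log).
    until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
      echo "apiserver not running yet, retrying..."
      sleep 0.5
    done
    echo "kube-apiserver pid: $pid"
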
	I1225 13:26:33.173890 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174349 1483946 main.go:141] libmachine: (embed-certs-880612) Found IP for machine: 192.168.50.179
	I1225 13:26:33.174372 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has current primary IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.174405 1483946 main.go:141] libmachine: (embed-certs-880612) Reserving static IP address...
	I1225 13:26:33.174805 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.174845 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | skip adding static IP to network mk-embed-certs-880612 - found existing host DHCP lease matching {name: "embed-certs-880612", mac: "52:54:00:a2:ab:67", ip: "192.168.50.179"}
	I1225 13:26:33.174860 1483946 main.go:141] libmachine: (embed-certs-880612) Reserved static IP address: 192.168.50.179
	I1225 13:26:33.174877 1483946 main.go:141] libmachine: (embed-certs-880612) Waiting for SSH to be available...
	I1225 13:26:33.174892 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Getting to WaitForSSH function...
	I1225 13:26:33.177207 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177579 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.177609 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.177711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH client type: external
	I1225 13:26:33.177737 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa (-rw-------)
	I1225 13:26:33.177777 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:33.177790 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | About to run SSH command:
	I1225 13:26:33.177803 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | exit 0
	I1225 13:26:33.274328 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:33.274736 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetConfigRaw
	I1225 13:26:33.275462 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.278056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278429 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.278483 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.278725 1483946 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/config.json ...
	I1225 13:26:33.278982 1483946 machine.go:88] provisioning docker machine ...
	I1225 13:26:33.279013 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:33.279236 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279448 1483946 buildroot.go:166] provisioning hostname "embed-certs-880612"
	I1225 13:26:33.279468 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.279619 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.281930 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282277 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.282311 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.282474 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.282704 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.282885 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.283033 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.283194 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.283700 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.283723 1483946 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880612 && echo "embed-certs-880612" | sudo tee /etc/hostname
	I1225 13:26:33.433456 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880612
	
	I1225 13:26:33.433483 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.436392 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.436794 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.436840 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.437004 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.437233 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437446 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.437595 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.437783 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.438112 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.438134 1483946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880612/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:33.579776 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:26:33.579813 1483946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:33.579845 1483946 buildroot.go:174] setting up certificates
	I1225 13:26:33.579859 1483946 provision.go:83] configureAuth start
	I1225 13:26:33.579874 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetMachineName
	I1225 13:26:33.580151 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:33.582843 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583233 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.583266 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.583461 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.585844 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586216 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.586253 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.586454 1483946 provision.go:138] copyHostCerts
	I1225 13:26:33.586532 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:33.586548 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:33.586604 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:33.586692 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:33.586704 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:33.586723 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:33.586771 1483946 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:33.586778 1483946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:33.586797 1483946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:33.586837 1483946 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880612 san=[192.168.50.179 192.168.50.179 localhost 127.0.0.1 minikube embed-certs-880612]
	I1225 13:26:33.640840 1483946 provision.go:172] copyRemoteCerts
	I1225 13:26:33.640921 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:33.640951 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.643970 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644390 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.644419 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.644606 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.644877 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.645065 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.645204 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:33.744907 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:33.769061 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 13:26:33.792125 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:26:33.816115 1483946 provision.go:86] duration metric: configureAuth took 236.215977ms
	I1225 13:26:33.816159 1483946 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:33.816373 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:33.816497 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:33.819654 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820075 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:33.820108 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:33.820287 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:33.820519 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820738 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:33.820873 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:33.821068 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:33.821403 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:33.821428 1483946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:34.159844 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:34.159882 1483946 machine.go:91] provisioned docker machine in 880.882549ms
	I1225 13:26:34.159897 1483946 start.go:300] post-start starting for "embed-certs-880612" (driver="kvm2")
	I1225 13:26:34.159934 1483946 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:34.159964 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.160327 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:34.160358 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.163009 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163367 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.163400 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.163600 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.163801 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.163943 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.164093 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.261072 1483946 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:34.265655 1483946 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:34.265686 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:34.265777 1483946 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:34.265881 1483946 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:34.265996 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:34.276013 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:34.299731 1483946 start.go:303] post-start completed in 139.812994ms
	I1225 13:26:34.299783 1483946 fix.go:56] fixHost completed within 20.612345515s
	I1225 13:26:34.299813 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.302711 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303189 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.303229 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.303363 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.303617 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.303856 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.304000 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.304198 1483946 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:34.304522 1483946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I1225 13:26:34.304535 1483946 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:34.439399 1483946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510794.384723199
	
	I1225 13:26:34.439426 1483946 fix.go:206] guest clock: 1703510794.384723199
	I1225 13:26:34.439433 1483946 fix.go:219] Guest: 2023-12-25 13:26:34.384723199 +0000 UTC Remote: 2023-12-25 13:26:34.29978875 +0000 UTC m=+107.780041384 (delta=84.934449ms)
	I1225 13:26:34.439468 1483946 fix.go:190] guest clock delta is within tolerance: 84.934449ms
	I1225 13:26:34.439475 1483946 start.go:83] releasing machines lock for "embed-certs-880612", held for 20.75208465s
	I1225 13:26:34.439518 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.439832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:34.442677 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443002 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.443031 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.443219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.443827 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444029 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:26:34.444168 1483946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:34.444225 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.444259 1483946 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:34.444295 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:26:34.447106 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447136 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447497 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447533 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:34.447553 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447571 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:34.447677 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447719 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:26:34.447860 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447904 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:26:34.447982 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448094 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:26:34.448170 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.448219 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:26:34.572590 1483946 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:34.578648 1483946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:34.723874 1483946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:34.731423 1483946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:34.731495 1483946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:34.752447 1483946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:34.752478 1483946 start.go:475] detecting cgroup driver to use...
	I1225 13:26:34.752539 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:34.766782 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:34.781457 1483946 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:34.781548 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:34.798097 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:34.813743 1483946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:34.936843 1483946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:35.053397 1483946 docker.go:219] disabling docker service ...
	I1225 13:26:35.053478 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:35.067702 1483946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:35.079670 1483946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:35.213241 1483946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:35.346105 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:35.359207 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:35.377259 1483946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:35.377347 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.388026 1483946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:35.388116 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.398180 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.411736 1483946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:35.425888 1483946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:35.436586 1483946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:35.446969 1483946 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:35.447028 1483946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:35.461401 1483946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
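The three commands above are the usual bridge-netfilter preparation: the sysctl probe fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. A rough local Go equivalent is sketched below; it assumes root on Linux and shells out to modprobe, which is not the SSH code path minikube itself uses.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when the bridge-nf sysctl key is
// absent, then enables IPv4 forwarding. Must run as root on Linux.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}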
	I1225 13:26:35.471896 1483946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:35.619404 1483946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:35.825331 1483946 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:35.825410 1483946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:35.830699 1483946 start.go:543] Will wait 60s for crictl version
	I1225 13:26:35.830779 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:26:35.834938 1483946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:35.874595 1483946 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:35.874717 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.924227 1483946 ssh_runner.go:195] Run: crio --version
	I1225 13:26:35.982707 1483946 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1225 13:26:35.984401 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetIP
	I1225 13:26:35.987241 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987669 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:26:35.987708 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:26:35.987991 1483946 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:35.992383 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
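The bash one-liner above rewrites /etc/hosts by dropping any existing host.minikube.internal entry and appending the current gateway IP. A small Go sketch of the same filter-and-append idea follows; the function name is made up for illustration and this is not how minikube performs the edit over SSH.

package main

import (
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<name>" from the hosts
// file and appends "<ip>\t<name>", mirroring the grep -v / echo / cp pattern.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = setHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal")
}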
	I1225 13:26:36.004918 1483946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:36.005000 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:36.053783 1483946 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:36.053887 1483946 ssh_runner.go:195] Run: which lz4
	I1225 13:26:36.058040 1483946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:26:36.062730 1483946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:36.062785 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:35.824151 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting to get IP...
	I1225 13:26:35.825061 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825643 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:35.825741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:35.825605 1484550 retry.go:31] will retry after 292.143168ms: waiting for machine to come up
	I1225 13:26:36.119220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119741 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.119787 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.119666 1484550 retry.go:31] will retry after 250.340048ms: waiting for machine to come up
	I1225 13:26:36.372343 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372894 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.372932 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.372840 1484550 retry.go:31] will retry after 434.335692ms: waiting for machine to come up
	I1225 13:26:36.808477 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809037 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:36.809070 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:36.808999 1484550 retry.go:31] will retry after 455.184367ms: waiting for machine to come up
	I1225 13:26:37.265791 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266330 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.266364 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.266278 1484550 retry.go:31] will retry after 487.994897ms: waiting for machine to come up
	I1225 13:26:37.756220 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756745 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:37.756774 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:37.756699 1484550 retry.go:31] will retry after 817.108831ms: waiting for machine to come up
	I1225 13:26:38.575846 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576271 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:38.576301 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:38.576222 1484550 retry.go:31] will retry after 1.022104679s: waiting for machine to come up
	I1225 13:26:39.600386 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:39.600901 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:39.600796 1484550 retry.go:31] will retry after 1.318332419s: waiting for machine to come up
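The retry.go lines above show the driver polling libvirt for a DHCP lease with an increasing, jittered delay between attempts. A generic version of that retry-until-timeout loop is sketched below; the backoff constants and jitter bounds are arbitrary choices for illustration, not the values retry.go uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a growing, jittered delay until it
// succeeds or the overall deadline passes.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempt := 0
	err := retryUntil(10*time.Second, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err, "attempts:", attempt)
}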
	I1225 13:26:35.190721 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.190828 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.203971 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:35.689934 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:35.690032 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:35.701978 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.190256 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.190355 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.204476 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:36.689969 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:36.690062 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:36.706632 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.189808 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.189921 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.203895 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:37.690391 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:37.690499 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:37.704914 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.190575 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.190694 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.208546 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:38.690090 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:38.690260 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:38.701827 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.190421 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.190549 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.202377 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:39.689978 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:39.690104 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:39.706511 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
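The repeated "Checking apiserver status" / pgrep failures above are a fixed-interval poll: the same check is re-run every ~500ms until it succeeds or a deadline is hit. A context-based version of that loop is sketched below; the check closure is a stand-in for the SSH pgrep call, not minikube's api_server.go.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollEvery runs check on a fixed interval until it succeeds or ctx expires.
func pollEvery(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	calls := 0
	err := pollEvery(ctx, 500*time.Millisecond, func() error {
		calls++
		return errors.New("unable to get apiserver pid") // stand-in for pgrep failing
	})
	fmt.Println("gave up after", calls, "checks:", err)
}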
	I1225 13:26:37.963805 1483946 crio.go:444] Took 1.905809 seconds to copy over tarball
	I1225 13:26:37.963892 1483946 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:40.988182 1483946 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024256156s)
	I1225 13:26:40.988214 1483946 crio.go:451] Took 3.024377 seconds to extract the tarball
	I1225 13:26:40.988225 1483946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:26:41.030256 1483946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:41.085117 1483946 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:26:41.085147 1483946 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:26:41.085236 1483946 ssh_runner.go:195] Run: crio config
	I1225 13:26:41.149962 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:26:41.149993 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:41.150020 1483946 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:26:41.150044 1483946 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880612 NodeName:embed-certs-880612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:26:41.150237 1483946 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880612"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:26:41.150312 1483946 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-880612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
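The kubeadm config and kubelet unit above are rendered from the option struct printed at kubeadm.go:176. A toy text/template rendering of just the InitConfiguration node section is shown below to illustrate the idea; the struct and template here are simplified stand-ins, not minikube's real templates.

package main

import (
	"os"
	"text/template"
)

// nodeParams is a simplified stand-in for the kubeadm options struct.
type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.50.179",
		BindPort:         8443,
		NodeName:         "embed-certs-880612",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}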
	I1225 13:26:41.150367 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:26:41.160557 1483946 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:26:41.160681 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:26:41.170564 1483946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1225 13:26:41.187315 1483946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:26:41.204638 1483946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1225 13:26:41.222789 1483946 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I1225 13:26:41.226604 1483946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:41.238315 1483946 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612 for IP: 192.168.50.179
	I1225 13:26:41.238363 1483946 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:41.238614 1483946 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:26:41.238665 1483946 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:26:41.238768 1483946 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/client.key
	I1225 13:26:41.238860 1483946 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key.518daada
	I1225 13:26:41.238925 1483946 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key
	I1225 13:26:41.239060 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:26:41.239098 1483946 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:26:41.239122 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:26:41.239167 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:26:41.239204 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:26:41.239245 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:26:41.239300 1483946 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:41.240235 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:26:41.265422 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:26:41.290398 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:26:41.315296 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/embed-certs-880612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:26:41.339984 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:26:41.363071 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:26:41.392035 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:26:41.419673 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:26:41.444242 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:26:41.468314 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:26:41.493811 1483946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:26:41.518255 1483946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:26:41.535605 1483946 ssh_runner.go:195] Run: openssl version
	I1225 13:26:41.541254 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:26:41.551784 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556610 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.556686 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:26:41.562299 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:26:41.572173 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
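The ca-certificates steps above copy each PEM into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0-style names). A sketch of that link step, shelling out to openssl for the hash the same way the log does, is below; running it for real needs root and an openssl binary on PATH, and the function name is invented for this sketch.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes `openssl x509 -hash -noout -in cert` and creates
// <certsDir>/<hash>.0 pointing at the certificate, like the ln -fs above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present (ln -fs semantics)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}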
	I1225 13:26:40.921702 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922293 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:40.922335 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:40.922225 1484550 retry.go:31] will retry after 1.835505717s: waiting for machine to come up
	I1225 13:26:42.760187 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760688 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:42.760714 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:42.760625 1484550 retry.go:31] will retry after 1.646709972s: waiting for machine to come up
	I1225 13:26:44.409540 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410023 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:44.410064 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:44.409998 1484550 retry.go:31] will retry after 1.922870398s: waiting for machine to come up
	I1225 13:26:40.190712 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.190831 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.205624 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:40.690729 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:40.690835 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:40.702671 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.190145 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.190234 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.201991 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.690585 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.690683 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.704041 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.190633 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.190745 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.202086 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.690049 1483118 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.690177 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.701556 1483118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.701597 1483118 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:42.701611 1483118 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:42.701635 1483118 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:42.701719 1483118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:42.745733 1483118 cri.go:89] found id: ""
	I1225 13:26:42.745835 1483118 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:42.761355 1483118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:42.773734 1483118 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:42.773812 1483118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785213 1483118 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:42.785242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:42.927378 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.715163 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:43.934803 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.024379 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:44.106069 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:44.106200 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:44.607243 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:41.582062 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692062 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.692156 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:26:41.698498 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:26:41.709171 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:26:41.719597 1483946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724562 1483946 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.724628 1483946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:26:41.730571 1483946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:26:41.740854 1483946 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:26:41.745792 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:26:41.752228 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:26:41.758318 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:26:41.764486 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:26:41.770859 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:26:41.777155 1483946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
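The openssl -checkend 86400 calls above verify that each control-plane certificate remains valid for at least the next 24 hours. The same check written directly against Go's crypto/x509 looks roughly like this (paths and naming are just for the example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// `window` from now, mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, "err:", err)
}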
	I1225 13:26:41.783382 1483946 kubeadm.go:404] StartCluster: {Name:embed-certs-880612 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-880612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:26:41.783493 1483946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:26:41.783557 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:41.827659 1483946 cri.go:89] found id: ""
	I1225 13:26:41.827738 1483946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:26:41.837713 1483946 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:26:41.837740 1483946 kubeadm.go:636] restartCluster start
	I1225 13:26:41.837788 1483946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:26:41.846668 1483946 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:41.847773 1483946 kubeconfig.go:92] found "embed-certs-880612" server: "https://192.168.50.179:8443"
	I1225 13:26:41.850105 1483946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:26:41.859124 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:41.859196 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:41.870194 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.359810 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.359906 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.371508 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:42.860078 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:42.860167 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:42.876302 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.359657 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.359761 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.376765 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:43.859950 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:43.860067 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:43.878778 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.359355 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.359439 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.371780 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:44.859294 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:44.859429 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:44.872286 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.359315 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.359438 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.375926 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:45.859453 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:45.859560 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:45.875608 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.360180 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.360335 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.376143 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:46.335832 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336405 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:46.336439 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:46.336342 1484550 retry.go:31] will retry after 2.75487061s: waiting for machine to come up
	I1225 13:26:49.092529 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092962 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | unable to find current IP address of domain default-k8s-diff-port-344803 in network mk-default-k8s-diff-port-344803
	I1225 13:26:49.092986 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | I1225 13:26:49.092926 1484550 retry.go:31] will retry after 4.456958281s: waiting for machine to come up
	I1225 13:26:45.106806 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:45.607205 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.106726 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.606675 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:46.628821 1483118 api_server.go:72] duration metric: took 2.522750929s to wait for apiserver process to appear ...
	I1225 13:26:46.628852 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:46.628878 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:46.629487 1483118 api_server.go:269] stopped: https://192.168.72.232:8443/healthz: Get "https://192.168.72.232:8443/healthz": dial tcp 192.168.72.232:8443: connect: connection refused
	I1225 13:26:47.129325 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
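The healthz wait above treats "connection refused" as retryable and keeps re-issuing GET /healthz until the apiserver answers 200. A small sketch of that probe follows; it skips TLS verification because this probe only cares about reachability (consistent with the anonymous 403 seen further down), but that is an assumption of the sketch rather than minikube's actual client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. Connection errors and non-200 codes are retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip verification, we only probe liveness.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
}

func main() {
	_ = waitHealthz("https://192.168.72.232:8443/healthz", 2*time.Minute)
}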
	I1225 13:26:46.860130 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:46.860255 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:46.875574 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.360120 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.360254 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.375470 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:47.860119 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:47.860205 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:47.875015 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.359513 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.359649 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.374270 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:48.859330 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:48.859424 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:48.871789 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.359307 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.359403 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.371339 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:49.859669 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:49.859766 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:49.872882 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.359345 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.359455 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.370602 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.859148 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:50.859271 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:50.871042 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.359459 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.359544 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.371252 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:50.824734 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:26:50.824772 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:26:50.824789 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:50.996870 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:50.996923 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.129079 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.134132 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.134169 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:51.629263 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:51.635273 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:26:51.635305 1483118 api_server.go:103] status: https://192.168.72.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:26:52.129955 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:26:52.135538 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:26:52.144432 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:26:52.144470 1483118 api_server.go:131] duration metric: took 5.515610636s to wait for apiserver health ...
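
For reference, the 403/500/200 sequence above is just a readiness poll against the apiserver's /healthz endpoint. Below is a minimal, self-contained sketch of that pattern (not minikube's api_server.go); the URL comes from the log, while the interval, timeout and TLS handling are illustrative assumptions.

// Minimal sketch (assumed, not minikube's implementation): poll /healthz until it
// returns 200, treating 403 (anonymous forbidden) and 500 (failed poststarthooks)
// responses like the ones logged above as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate here, so the probe
		// skips verification, as an anonymous health check would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" - control plane is healthy
			}
			// 403 and 500 both mean "keep waiting", exactly as in the log above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.232:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
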
	I1225 13:26:52.144483 1483118 cni.go:84] Creating CNI manager for ""
	I1225 13:26:52.144491 1483118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:26:52.146289 1483118 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:26:52.147684 1483118 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:26:52.187156 1483118 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
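
The 457-byte file copied above is the bridge CNI configuration. The sketch below writes a conflist of the general shape such a bridge config takes; the exact contents minikube generates (and its pod CIDR) are not shown in the log, so the JSON here is an assumption for illustration only.

// Hedged sketch: write a minimal bridge CNI conflist to /etc/cni/net.d.
// The field values (subnet in particular) are illustrative, not the real file.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
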
	I1225 13:26:52.210022 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:26:52.225137 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:26:52.225190 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:26:52.225200 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:26:52.225218 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:26:52.225230 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:26:52.225239 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:26:52.225248 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:26:52.225262 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:26:52.225272 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:26:52.225288 1483118 system_pods.go:74] duration metric: took 15.241676ms to wait for pod list to return data ...
	I1225 13:26:52.225300 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:26:52.229429 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:26:52.229471 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:26:52.229527 1483118 node_conditions.go:105] duration metric: took 4.217644ms to run NodePressure ...
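
The "8 kube-system pods found" and node-capacity lines above come from listing pods and reading node status through the API. A rough client-go equivalent is sketched below (assumed, not minikube's system_pods.go or node_conditions.go); the kubeconfig path is an illustrative assumption.

// Sketch: list kube-system pods and print node CPU / ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}
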
	I1225 13:26:52.229549 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.630596 1483118 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635810 1483118 kubeadm.go:787] kubelet initialised
	I1225 13:26:52.635835 1483118 kubeadm.go:788] duration metric: took 5.192822ms waiting for restarted kubelet to initialise ...
	I1225 13:26:52.635844 1483118 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:26:52.645095 1483118 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.652146 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652181 1483118 pod_ready.go:81] duration metric: took 7.040805ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.652194 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "coredns-76f75df574-pwk9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.652203 1483118 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.658310 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658347 1483118 pod_ready.go:81] duration metric: took 6.126503ms waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.658359 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "etcd-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.658369 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.663826 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663871 1483118 pod_ready.go:81] duration metric: took 5.492644ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.663884 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-apiserver-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.663893 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:52.669098 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669137 1483118 pod_ready.go:81] duration metric: took 5.230934ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:52.669148 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:52.669157 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.035736 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035782 1483118 pod_ready.go:81] duration metric: took 366.614624ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.035796 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-proxy-jbch6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.035806 1483118 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.435089 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435123 1483118 pod_ready.go:81] duration metric: took 399.30822ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.435135 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "kube-scheduler-no-preload-330063" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.435145 1483118 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	I1225 13:26:53.835248 1483118 pod_ready.go:97] node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835280 1483118 pod_ready.go:81] duration metric: took 400.124904ms waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:26:53.835290 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-330063" hosting pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:53.835299 1483118 pod_ready.go:38] duration metric: took 1.199443126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
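
Every "skipping!" line above follows the same rule: a pod's Ready condition is only trusted once its hosting node reports Ready, so while no-preload-330063 is NotReady each system pod check is skipped and retried. A hedged client-go sketch of that gate is shown below (assumed behaviour, not minikube's pod_ready.go); the kubeconfig path is illustrative.

// Sketch: treat a pod as not-Ready-yet while its node's Ready condition is false.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// Skip (and retry later) while the hosting node is not Ready, mirroring the log above.
	if ready, err := nodeReady(client, pod.Spec.NodeName); err != nil || !ready {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(client, "kube-system", "coredns-76f75df574-pwk9h")
	fmt.Println(ok, err)
}
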
	I1225 13:26:53.835317 1483118 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:26:53.848912 1483118 ops.go:34] apiserver oom_adj: -16
	I1225 13:26:53.848954 1483118 kubeadm.go:640] restartCluster took 21.184297233s
	I1225 13:26:53.848965 1483118 kubeadm.go:406] StartCluster complete in 21.235197323s
	I1225 13:26:53.849001 1483118 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.849140 1483118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:26:53.851909 1483118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:26:53.852278 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:26:53.852353 1483118 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:26:53.852461 1483118 addons.go:69] Setting storage-provisioner=true in profile "no-preload-330063"
	I1225 13:26:53.852495 1483118 addons.go:237] Setting addon storage-provisioner=true in "no-preload-330063"
	W1225 13:26:53.852507 1483118 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:26:53.852514 1483118 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:26:53.852555 1483118 addons.go:69] Setting default-storageclass=true in profile "no-preload-330063"
	I1225 13:26:53.852579 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852607 1483118 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-330063"
	I1225 13:26:53.852871 1483118 addons.go:69] Setting metrics-server=true in profile "no-preload-330063"
	I1225 13:26:53.852895 1483118 addons.go:237] Setting addon metrics-server=true in "no-preload-330063"
	W1225 13:26:53.852904 1483118 addons.go:246] addon metrics-server should already be in state true
	I1225 13:26:53.852948 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.852985 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853012 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.853315 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.853361 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.858023 1483118 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-330063" context rescaled to 1 replicas
	I1225 13:26:53.858077 1483118 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:26:53.861368 1483118 out.go:177] * Verifying Kubernetes components...
	I1225 13:26:53.862819 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:26:53.870209 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1225 13:26:53.870486 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I1225 13:26:53.870693 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.870807 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.871066 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I1225 13:26:53.871329 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871341 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871426 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.871433 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.871742 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.871770 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.872271 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872308 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.872511 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.872896 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.872923 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.873167 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.873180 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.873549 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.873693 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.878043 1483118 addons.go:237] Setting addon default-storageclass=true in "no-preload-330063"
	W1225 13:26:53.878077 1483118 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:26:53.878117 1483118 host.go:66] Checking if "no-preload-330063" exists ...
	I1225 13:26:53.878613 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.878657 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.891971 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I1225 13:26:53.892418 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.893067 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.893092 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.893461 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.893634 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.895563 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.897916 1483118 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:26:53.896007 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I1225 13:26:53.899799 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:26:53.899823 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:26:53.899858 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.900294 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.900987 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.901006 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.901451 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.901677 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I1225 13:26:53.902344 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.902956 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.902981 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.903419 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.903917 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.903986 1483118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:53.904022 1483118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:53.904445 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.904452 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.904471 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.904615 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.904785 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.906582 1483118 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:26:53.551972 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552449 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Found IP for machine: 192.168.61.39
	I1225 13:26:53.552500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has current primary IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.552515 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserving static IP address...
	I1225 13:26:53.552918 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.552967 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | skip adding static IP to network mk-default-k8s-diff-port-344803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-344803", mac: "52:54:00:80:85:71", ip: "192.168.61.39"}
	I1225 13:26:53.552990 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Reserved static IP address: 192.168.61.39
	I1225 13:26:53.553003 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Waiting for SSH to be available...
	I1225 13:26:53.553041 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Getting to WaitForSSH function...
	I1225 13:26:53.555272 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555619 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.555654 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.555753 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH client type: external
	I1225 13:26:53.555785 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa (-rw-------)
	I1225 13:26:53.555828 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:26:53.555852 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | About to run SSH command:
	I1225 13:26:53.555872 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | exit 0
	I1225 13:26:53.642574 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | SSH cmd err, output: <nil>: 
	I1225 13:26:53.643094 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetConfigRaw
	I1225 13:26:53.643946 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.646842 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647308 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.647351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.647580 1484104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/config.json ...
	I1225 13:26:53.647806 1484104 machine.go:88] provisioning docker machine ...
	I1225 13:26:53.647827 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:53.648054 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648255 1484104 buildroot.go:166] provisioning hostname "default-k8s-diff-port-344803"
	I1225 13:26:53.648274 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.648485 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.650935 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651291 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.651327 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.651479 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.651718 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.651887 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.652028 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.652213 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.652587 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.652605 1484104 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-344803 && echo "default-k8s-diff-port-344803" | sudo tee /etc/hostname
	I1225 13:26:53.782284 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-344803
	
	I1225 13:26:53.782315 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.785326 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785631 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.785668 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.785876 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:53.786149 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786374 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:53.786600 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:53.786806 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:53.787202 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:53.787222 1484104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-344803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-344803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-344803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:26:53.916809 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
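
The provisioning steps above run shell commands on the VM over SSH with the machine's private key. A self-contained sketch of that pattern follows (assumed, not libmachine's implementation); the address, user and key path are taken from the log, everything else is illustrative.

// Sketch: run a provisioning command (set the hostname) on the VM over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	})
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	host := "default-k8s-diff-port-344803"
	out, err := runSSH("192.168.61.39:22", "docker",
		"/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa",
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host))
	fmt.Println(out, err)
}
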
	I1225 13:26:53.916844 1484104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:26:53.916870 1484104 buildroot.go:174] setting up certificates
	I1225 13:26:53.916882 1484104 provision.go:83] configureAuth start
	I1225 13:26:53.916900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetMachineName
	I1225 13:26:53.917233 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:53.920048 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920377 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.920402 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.920538 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:53.923177 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923404 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:53.923437 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:53.923584 1484104 provision.go:138] copyHostCerts
	I1225 13:26:53.923666 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:26:53.923686 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:26:53.923763 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:26:53.923934 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:26:53.923947 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:26:53.923978 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:26:53.924078 1484104 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:26:53.924088 1484104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:26:53.924115 1484104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:26:53.924207 1484104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-344803 san=[192.168.61.39 192.168.61.39 localhost 127.0.0.1 minikube default-k8s-diff-port-344803]
	I1225 13:26:54.014673 1484104 provision.go:172] copyRemoteCerts
	I1225 13:26:54.014739 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:26:54.014772 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.018361 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.018849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.018924 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.019089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.019351 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.019559 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.019949 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.120711 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:26:54.155907 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1225 13:26:54.192829 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 13:26:54.227819 1484104 provision.go:86] duration metric: configureAuth took 310.912829ms
	I1225 13:26:54.227853 1484104 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:26:54.228119 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:26:54.228236 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.232535 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232580 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.232628 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.232945 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.233215 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233422 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.233608 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.233801 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.234295 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.234322 1484104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:26:54.638656 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:26:54.638772 1484104 machine.go:91] provisioned docker machine in 990.950916ms
	I1225 13:26:54.638798 1484104 start.go:300] post-start starting for "default-k8s-diff-port-344803" (driver="kvm2")
	I1225 13:26:54.638821 1484104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:26:54.638883 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.639341 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:26:54.639383 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.643369 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.643810 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.643863 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.644140 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.644444 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.644624 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.644774 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.740189 1484104 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:26:54.745972 1484104 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:26:54.746009 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:26:54.746104 1484104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:26:54.746229 1484104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:26:54.746362 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:26:54.758199 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:26:54.794013 1484104 start.go:303] post-start completed in 155.186268ms
	I1225 13:26:54.794048 1484104 fix.go:56] fixHost completed within 20.354368879s
	I1225 13:26:54.794077 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.797620 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798092 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.798129 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.798423 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.798692 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.798900 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.799067 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.799293 1484104 main.go:141] libmachine: Using SSH client type: native
	I1225 13:26:54.799807 1484104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.61.39 22 <nil> <nil>}
	I1225 13:26:54.799829 1484104 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:26:54.933026 1482618 start.go:369] acquired machines lock for "old-k8s-version-198979" in 59.553202424s
	I1225 13:26:54.933097 1482618 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:26:54.933105 1482618 fix.go:54] fixHost starting: 
	I1225 13:26:54.933577 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:26:54.933620 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:26:54.956206 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I1225 13:26:54.956801 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:54.958396 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:26:54.958425 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:54.958887 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:54.959164 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:26:54.959384 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:26:54.961270 1482618 fix.go:102] recreateIfNeeded on old-k8s-version-198979: state=Stopped err=<nil>
	I1225 13:26:54.961305 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	W1225 13:26:54.961494 1482618 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:26:54.963775 1482618 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-198979" ...
	I1225 13:26:53.904908 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.908114 1483118 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:53.908130 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:26:53.908147 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.908370 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.912254 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.912861 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.912885 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.913096 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.913324 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.913510 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.913629 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:53.959638 1483118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I1225 13:26:53.960190 1483118 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:26:53.960890 1483118 main.go:141] libmachine: Using API Version  1
	I1225 13:26:53.960913 1483118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:26:53.961320 1483118 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:26:53.961603 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetState
	I1225 13:26:53.963927 1483118 main.go:141] libmachine: (no-preload-330063) Calling .DriverName
	I1225 13:26:53.964240 1483118 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:53.964262 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:26:53.964282 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHHostname
	I1225 13:26:53.967614 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968092 1483118 main.go:141] libmachine: (no-preload-330063) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:c3:b6", ip: ""} in network mk-no-preload-330063: {Iface:virbr3 ExpiryTime:2023-12-25 14:26:03 +0000 UTC Type:0 Mac:52:54:00:e9:c3:b6 Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:no-preload-330063 Clientid:01:52:54:00:e9:c3:b6}
	I1225 13:26:53.968155 1483118 main.go:141] libmachine: (no-preload-330063) DBG | domain no-preload-330063 has defined IP address 192.168.72.232 and MAC address 52:54:00:e9:c3:b6 in network mk-no-preload-330063
	I1225 13:26:53.968471 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHPort
	I1225 13:26:53.968679 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHKeyPath
	I1225 13:26:53.968879 1483118 main.go:141] libmachine: (no-preload-330063) Calling .GetSSHUsername
	I1225 13:26:53.969040 1483118 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/no-preload-330063/id_rsa Username:docker}
	I1225 13:26:54.064639 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:26:54.064674 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:26:54.093609 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:26:54.147415 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:26:54.147449 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:26:54.148976 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:26:54.160381 1483118 node_ready.go:35] waiting up to 6m0s for node "no-preload-330063" to be "Ready" ...
	I1225 13:26:54.160490 1483118 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:26:54.202209 1483118 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.202242 1483118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:26:54.276251 1483118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:26:54.965270 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Start
	I1225 13:26:54.965680 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring networks are active...
	I1225 13:26:54.966477 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network default is active
	I1225 13:26:54.966919 1482618 main.go:141] libmachine: (old-k8s-version-198979) Ensuring network mk-old-k8s-version-198979 is active
	I1225 13:26:54.967420 1482618 main.go:141] libmachine: (old-k8s-version-198979) Getting domain xml...
	I1225 13:26:54.968585 1482618 main.go:141] libmachine: (old-k8s-version-198979) Creating domain...
	I1225 13:26:55.590940 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.497275379s)
	I1225 13:26:55.591005 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591020 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591108 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442107411s)
	I1225 13:26:55.591127 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591136 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.591247 1483118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.314957717s)
	I1225 13:26:55.591268 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.591280 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.595765 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.595838 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.595847 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.595859 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.595867 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596016 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596049 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596058 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596067 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596075 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596177 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596218 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596226 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596236 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.596244 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.596485 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596515 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596929 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.596972 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.596979 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.596990 1483118 addons.go:473] Verifying addon metrics-server=true in "no-preload-330063"
	I1225 13:26:55.597032 1483118 main.go:141] libmachine: (no-preload-330063) DBG | Closing plugin on server side
	I1225 13:26:55.597067 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.597076 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.610755 1483118 main.go:141] libmachine: Making call to close driver server
	I1225 13:26:55.610788 1483118 main.go:141] libmachine: (no-preload-330063) Calling .Close
	I1225 13:26:55.611238 1483118 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:26:55.611264 1483118 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:26:55.613767 1483118 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1225 13:26:51.859989 1483946 api_server.go:166] Checking apiserver status ...
	I1225 13:26:51.860081 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:26:51.871647 1483946 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:26:51.871684 1483946 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:26:51.871709 1483946 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:26:51.871725 1483946 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:26:51.871817 1483946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:26:51.919587 1483946 cri.go:89] found id: ""
	I1225 13:26:51.919706 1483946 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:26:51.935341 1483946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:26:51.944522 1483946 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:26:51.944588 1483946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954607 1483946 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:26:51.954637 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.092831 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:52.921485 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.161902 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.270786 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:26:53.340226 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:26:53.340331 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:53.841309 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.341486 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:54.841104 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.341409 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.841238 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:26:55.867371 1483946 api_server.go:72] duration metric: took 2.52714535s to wait for apiserver process to appear ...
	I1225 13:26:55.867406 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:26:55.867434 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:26:55.867970 1483946 api_server.go:269] stopped: https://192.168.50.179:8443/healthz: Get "https://192.168.50.179:8443/healthz": dial tcp 192.168.50.179:8443: connect: connection refused
	I1225 13:26:56.368335 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
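
	The healthz probes above poll the apiserver until it is listening and its post-start hooks finish. A minimal Go sketch of that loop (endpoint and timings are illustrative, not minikube's actual implementation); connection refused, 403 from the anonymous probe, and 500 while bootstrap hooks are pending are all treated as "keep waiting", matching the log output:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.50.179:8443/healthz" // endpoint taken from the log lines above
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert is not in this probe's trust store,
			// so skip verification for the health check only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err != nil {
				// connection refused: the apiserver process is not listening yet
				fmt.Println("not up yet:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			// 403 (anonymous probe) and 500 (post-start hooks still failing)
			// both mean the apiserver is not ready yet, as in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			time.Sleep(500 * time.Millisecond)
		}
	}
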
	I1225 13:26:54.932810 1484104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510814.876127642
	
	I1225 13:26:54.932838 1484104 fix.go:206] guest clock: 1703510814.876127642
	I1225 13:26:54.932848 1484104 fix.go:219] Guest: 2023-12-25 13:26:54.876127642 +0000 UTC Remote: 2023-12-25 13:26:54.794053361 +0000 UTC m=+104.977714576 (delta=82.074281ms)
	I1225 13:26:54.932878 1484104 fix.go:190] guest clock delta is within tolerance: 82.074281ms
	I1225 13:26:54.932885 1484104 start.go:83] releasing machines lock for "default-k8s-diff-port-344803", held for 20.493256775s
	I1225 13:26:54.932920 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.933380 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:54.936626 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.937262 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.937534 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938366 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938583 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:26:54.938676 1484104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:26:54.938722 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.938826 1484104 ssh_runner.go:195] Run: cat /version.json
	I1225 13:26:54.938854 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:26:54.942392 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.942792 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.942831 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.943292 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.943487 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.943635 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.943764 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:54.943922 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.944870 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:54.945020 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:54.945066 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:26:54.945318 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:26:54.945498 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:26:54.945743 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:26:55.069674 1484104 ssh_runner.go:195] Run: systemctl --version
	I1225 13:26:55.078333 1484104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:26:55.247706 1484104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:26:55.256782 1484104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:26:55.256885 1484104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:26:55.278269 1484104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:26:55.278303 1484104 start.go:475] detecting cgroup driver to use...
	I1225 13:26:55.278383 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:26:55.302307 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:26:55.322161 1484104 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:26:55.322345 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:26:55.342241 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:26:55.361128 1484104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:26:55.547880 1484104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:26:55.693711 1484104 docker.go:219] disabling docker service ...
	I1225 13:26:55.693804 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:26:55.708058 1484104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:26:55.721136 1484104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:26:55.890044 1484104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:26:56.042549 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:26:56.061359 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:26:56.086075 1484104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:26:56.086169 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.100059 1484104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:26:56.100162 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.113858 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.127589 1484104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:26:56.140964 1484104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:26:56.155180 1484104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:26:56.167585 1484104 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:26:56.167716 1484104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:26:56.186467 1484104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:26:56.200044 1484104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:26:56.339507 1484104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:26:56.563294 1484104 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:26:56.563385 1484104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:26:56.570381 1484104 start.go:543] Will wait 60s for crictl version
	I1225 13:26:56.570477 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:26:56.575675 1484104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:26:56.617219 1484104 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:26:56.617322 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.679138 1484104 ssh_runner.go:195] Run: crio --version
	I1225 13:26:56.751125 1484104 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
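
	The preceding block rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image and cgroupfs driver) and restarts CRI-O before verifying it with crictl. A minimal sketch of the same edits, assuming direct root shell access on the guest rather than minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell command as root and prints its combined output.
	func run(cmd string) error {
		out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func main() {
		steps := []string{
			// point CRI-O at the pause image used by this Kubernetes version
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// use the cgroupfs cgroup driver, matching the kubelet configuration
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// apply the new configuration and confirm the runtime answers CRI calls
			`systemctl restart crio`,
			`crictl version`,
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}
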
	I1225 13:26:56.752677 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetIP
	I1225 13:26:56.756612 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757108 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:26:56.757142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:26:56.757502 1484104 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1225 13:26:56.763739 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:26:56.781952 1484104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:26:56.782029 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:26:56.840852 1484104 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1225 13:26:56.840939 1484104 ssh_runner.go:195] Run: which lz4
	I1225 13:26:56.845412 1484104 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:26:56.850135 1484104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:26:56.850181 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1225 13:26:58.731034 1484104 crio.go:444] Took 1.885656 seconds to copy over tarball
	I1225 13:26:58.731138 1484104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:26:55.615056 1483118 addons.go:508] enable addons completed in 1.762702944s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1225 13:26:56.169115 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:58.665700 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:26:56.860066 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting to get IP...
	I1225 13:26:56.860987 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:56.861644 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:56.861765 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:56.861626 1484760 retry.go:31] will retry after 198.102922ms: waiting for machine to come up
	I1225 13:26:57.061281 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.062001 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.062029 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.061907 1484760 retry.go:31] will retry after 299.469436ms: waiting for machine to come up
	I1225 13:26:57.362874 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.363385 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.363441 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.363363 1484760 retry.go:31] will retry after 460.796393ms: waiting for machine to come up
	I1225 13:26:57.826330 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:57.827065 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:57.827098 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:57.827021 1484760 retry.go:31] will retry after 397.690798ms: waiting for machine to come up
	I1225 13:26:58.226942 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.227490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.227528 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.227437 1484760 retry.go:31] will retry after 731.798943ms: waiting for machine to come up
	I1225 13:26:58.960490 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:58.960969 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:58.961000 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:58.960930 1484760 retry.go:31] will retry after 577.614149ms: waiting for machine to come up
	I1225 13:26:59.540871 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:26:59.541581 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:26:59.541607 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:26:59.541494 1484760 retry.go:31] will retry after 1.177902051s: waiting for machine to come up
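
	The "will retry after ..." lines above come from a polling loop that waits for the restarted VM to obtain a DHCP lease. A minimal sketch of that retry-with-jitter pattern (lookupIP is a hypothetical stand-in for the libvirt query; timings are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for "ask libvirt for the domain's current IP".
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.61.39", nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// jitter the delay so repeated probes do not land in lockstep
			wait := time.Duration(200+rand.Intn(1000)) * time.Millisecond
			fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, wait)
			time.Sleep(wait)
		}
		fmt.Println("timed out waiting for machine to come up")
	}
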
	I1225 13:27:00.799310 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.799355 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.799376 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.905272 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.905311 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:00.905330 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:00.922285 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:00.922324 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:01.367590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.374093 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.374155 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.440592 1484104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.709419632s)
	I1225 13:27:02.440624 1484104 crio.go:451] Took 3.709555 seconds to extract the tarball
	I1225 13:27:02.440636 1484104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:02.504136 1484104 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:02.613720 1484104 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:27:02.613752 1484104 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:27:02.613839 1484104 ssh_runner.go:195] Run: crio config
	I1225 13:27:02.685414 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:02.685436 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:02.685459 1484104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:02.685477 1484104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-344803 NodeName:default-k8s-diff-port-344803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:27:02.685627 1484104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-344803"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:02.685710 1484104 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-344803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1225 13:27:02.685778 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1225 13:27:02.696327 1484104 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:02.696420 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:02.707369 1484104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1225 13:27:02.728181 1484104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:02.748934 1484104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1225 13:27:02.770783 1484104 ssh_runner.go:195] Run: grep 192.168.61.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:02.775946 1484104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:02.790540 1484104 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803 for IP: 192.168.61.39
	I1225 13:27:02.790590 1484104 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:02.790792 1484104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:02.790862 1484104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:02.790961 1484104 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.key
	I1225 13:27:02.859647 1484104 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key.daee23f3
	I1225 13:27:02.859773 1484104 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key
	I1225 13:27:02.859934 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:02.859993 1484104 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:02.860010 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:02.860037 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:02.860061 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:02.860082 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:02.860121 1484104 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:02.860871 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:02.889354 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 13:27:02.916983 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:02.943348 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:27:02.969940 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:02.996224 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:03.021662 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:03.052589 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:03.080437 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:03.107973 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:03.134921 1484104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:03.161948 1484104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:03.184606 1484104 ssh_runner.go:195] Run: openssl version
	I1225 13:27:03.192305 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:03.204868 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209793 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.209895 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:03.216568 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:03.229131 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:03.241634 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247328 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.247397 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:03.253730 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:03.267063 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:03.281957 1484104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288393 1484104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.288481 1484104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:03.295335 1484104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
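
	The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem in this log), which is how OpenSSL-based clients locate a CA by subject. A minimal Go sketch of the same hash-and-link step (paths are the illustrative ones from the log; requires root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Path taken from the log above; any PEM CA certificate works.
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// "openssl x509 -hash" prints the 8-hex-digit subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))

		// OpenSSL-style lookup expects /etc/ssl/certs/<hash>.0 -> certificate.
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // mirror "ln -fs": replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink failed (run as root):", err)
			return
		}
		fmt.Println("linked", link, "->", cert)
	}
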
	I1225 13:27:03.307900 1484104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:03.313207 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:03.319949 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:03.327223 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:03.333927 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:03.341434 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:03.349298 1484104 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 13:27:03.356303 1484104 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-344803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-344803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:03.356409 1484104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:03.356463 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:03.407914 1484104 cri.go:89] found id: ""
	I1225 13:27:03.407991 1484104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:03.418903 1484104 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:03.418928 1484104 kubeadm.go:636] restartCluster start
	I1225 13:27:03.418981 1484104 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:03.429758 1484104 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.431242 1484104 kubeconfig.go:92] found "default-k8s-diff-port-344803" server: "https://192.168.61.39:8444"
	I1225 13:27:03.433847 1484104 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:03.443564 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.443648 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.457188 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:03.943692 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:03.943806 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:03.956490 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:04.443680 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.443781 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.464817 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:00.671397 1483118 node_ready.go:58] node "no-preload-330063" has status "Ready":"False"
	I1225 13:27:01.665347 1483118 node_ready.go:49] node "no-preload-330063" has status "Ready":"True"
	I1225 13:27:01.665383 1483118 node_ready.go:38] duration metric: took 7.504959726s waiting for node "no-preload-330063" to be "Ready" ...
	I1225 13:27:01.665398 1483118 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:01.675515 1483118 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688377 1483118 pod_ready.go:92] pod "coredns-76f75df574-pwk9h" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:01.688467 1483118 pod_ready.go:81] duration metric: took 12.819049ms waiting for pod "coredns-76f75df574-pwk9h" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:01.688492 1483118 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:03.697007 1483118 pod_ready.go:102] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:04.379595 1483118 pod_ready.go:92] pod "etcd-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.379628 1483118 pod_ready.go:81] duration metric: took 2.691119222s waiting for pod "etcd-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.379643 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393427 1483118 pod_ready.go:92] pod "kube-apiserver-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.393459 1483118 pod_ready.go:81] duration metric: took 13.806505ms waiting for pod "kube-apiserver-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.393473 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454291 1483118 pod_ready.go:92] pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.454387 1483118 pod_ready.go:81] duration metric: took 60.903507ms waiting for pod "kube-controller-manager-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.454417 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525436 1483118 pod_ready.go:92] pod "kube-proxy-jbch6" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.525471 1483118 pod_ready.go:81] duration metric: took 71.040817ms waiting for pod "kube-proxy-jbch6" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.525486 1483118 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546670 1483118 pod_ready.go:92] pod "kube-scheduler-no-preload-330063" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:04.546709 1483118 pod_ready.go:81] duration metric: took 21.213348ms waiting for pod "kube-scheduler-no-preload-330063" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:04.546726 1483118 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
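
The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True and then record the wait as a duration metric. A rough stand-alone equivalent using client-go is sketched below; the kubeconfig path and polling interval are assumptions, and minikube's own pod_ready.go may differ in detail.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // polling interval is an assumption
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if err := waitPodReady(cs, "kube-system", "etcd-no-preload-330063", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("duration metric: took", time.Since(start))
}
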
	I1225 13:27:01.868308 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:01.913335 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:01.913393 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.367660 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.375382 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.375424 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:02.867590 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:02.873638 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:02.873680 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.368014 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.377785 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.377827 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:03.867933 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:03.873979 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:03.874013 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.367576 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.377835 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.377884 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:04.868444 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:04.879138 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:04.879187 1483946 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:05.367519 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:27:05.377570 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:27:05.388572 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:05.388605 1483946 api_server.go:131] duration metric: took 9.521192442s to wait for apiserver health ...
	I1225 13:27:05.388615 1483946 cni.go:84] Creating CNI manager for ""
	I1225 13:27:05.388625 1483946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:05.390592 1483946 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
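
The api_server.go entries above poll https://192.168.50.179:8443/healthz roughly twice a second until the verbose response flips from 500 ("[-]poststarthook/rbac/bootstrap-roles failed: reason withheld") to a plain 200 "ok". The sketch below is a minimal stand-alone poller for that endpoint; it assumes the endpoint can be queried with certificate verification disabled and without cluster credentials, whereas minikube's real checker authenticates with the cluster's client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.179:8443/healthz" // address taken from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification is only acceptable against a throwaway test VM.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for start := time.Now(); time.Since(start) < 2*time.Minute; time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body)) // prints "ok"
			return
		}
		fmt.Printf("healthz returned %d, still waiting\n", resp.StatusCode)
	}
	fmt.Println("gave up waiting for apiserver health")
}
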
	I1225 13:27:00.720918 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:00.721430 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:00.721457 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:00.721380 1484760 retry.go:31] will retry after 931.125211ms: waiting for machine to come up
	I1225 13:27:01.654661 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:01.655341 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:01.655367 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:01.655263 1484760 retry.go:31] will retry after 1.333090932s: waiting for machine to come up
	I1225 13:27:02.991018 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:02.991520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:02.991555 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:02.991468 1484760 retry.go:31] will retry after 2.006684909s: waiting for machine to come up
	I1225 13:27:05.000424 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:05.000972 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:05.001023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:05.000908 1484760 retry.go:31] will retry after 2.72499386s: waiting for machine to come up
	I1225 13:27:05.391952 1483946 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:05.406622 1483946 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:27:05.429599 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:05.441614 1483946 system_pods.go:59] 9 kube-system pods found
	I1225 13:27:05.441681 1483946 system_pods.go:61] "coredns-5dd5756b68-4jqz4" [026524a6-1f73-4644-8a80-b276326178b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441698 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:05.441710 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:05.441721 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:05.441732 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:05.441746 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:05.441758 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:05.441773 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:05.441790 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:27:05.441812 1483946 system_pods.go:74] duration metric: took 12.174684ms to wait for pod list to return data ...
	I1225 13:27:05.441824 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:05.447018 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:05.447064 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:05.447079 1483946 node_conditions.go:105] duration metric: took 5.247366ms to run NodePressure ...
	I1225 13:27:05.447106 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:05.767972 1483946 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774281 1483946 kubeadm.go:787] kubelet initialised
	I1225 13:27:05.774307 1483946 kubeadm.go:788] duration metric: took 6.300121ms waiting for restarted kubelet to initialise ...
	I1225 13:27:05.774316 1483946 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:05.781474 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.789698 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789732 1483946 pod_ready.go:81] duration metric: took 8.22748ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.789746 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.789758 1483946 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.798517 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798584 1483946 pod_ready.go:81] duration metric: took 8.811967ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.798601 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.798612 1483946 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.804958 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.804998 1483946 pod_ready.go:81] duration metric: took 6.356394ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.805018 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "etcd-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.805028 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:05.834502 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834549 1483946 pod_ready.go:81] duration metric: took 29.510044ms waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:05.834561 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:05.834571 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.234676 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234728 1483946 pod_ready.go:81] duration metric: took 400.145957ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.234742 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.234752 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:06.634745 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634785 1483946 pod_ready.go:81] duration metric: took 400.019189ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:06.634798 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-proxy-677d7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:06.634807 1483946 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.034762 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034793 1483946 pod_ready.go:81] duration metric: took 399.977148ms waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.034803 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.034810 1483946 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:07.433932 1483946 pod_ready.go:97] node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433969 1483946 pod_ready.go:81] duration metric: took 399.14889ms waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:07.433982 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-880612" hosting pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880612" has status "Ready":"False"
	I1225 13:27:07.433992 1483946 pod_ready.go:38] duration metric: took 1.659666883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:07.434016 1483946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:07.448377 1483946 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:07.448405 1483946 kubeadm.go:640] restartCluster took 25.610658268s
	I1225 13:27:07.448415 1483946 kubeadm.go:406] StartCluster complete in 25.665045171s
	I1225 13:27:07.448443 1483946 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.448530 1483946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:07.451369 1483946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:07.453102 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:07.453244 1483946 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:07.453332 1483946 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880612"
	I1225 13:27:07.453351 1483946 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-880612"
	W1225 13:27:07.453363 1483946 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:07.453432 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453450 1483946 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:27:07.453516 1483946 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880612"
	I1225 13:27:07.453536 1483946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880612"
	I1225 13:27:07.453860 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453870 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.453902 1483946 addons.go:69] Setting metrics-server=true in profile "embed-certs-880612"
	I1225 13:27:07.453917 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.453925 1483946 addons.go:237] Setting addon metrics-server=true in "embed-certs-880612"
	W1225 13:27:07.454160 1483946 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:07.454211 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.453903 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.454601 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.454669 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.476508 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1225 13:27:07.476720 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1225 13:27:07.477202 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477210 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.477794 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477815 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.477957 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.477971 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.478407 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.478478 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.479041 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.479083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.480350 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.483762 1483946 addons.go:237] Setting addon default-storageclass=true in "embed-certs-880612"
	W1225 13:27:07.483783 1483946 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:07.483816 1483946 host.go:66] Checking if "embed-certs-880612" exists ...
	I1225 13:27:07.484249 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.484285 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.489369 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1225 13:27:07.489817 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.490332 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.490354 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.491339 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.494037 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.494083 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.501003 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1225 13:27:07.501737 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.502399 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.502422 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.502882 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.503092 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.505387 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.507725 1483946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:07.509099 1483946 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.509121 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:07.509153 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.513153 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.513923 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.513957 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.514226 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.514426 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.514610 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.515190 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.516933 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I1225 13:27:07.517681 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.518194 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.518220 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.518784 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1225 13:27:07.519309 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.519400 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.519930 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.519956 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.520525 1483946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:07.520573 1483946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:07.520819 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.521050 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.523074 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.525265 1483946 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:07.526542 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:07.526569 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:07.526598 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.530316 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.530846 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.530883 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.531223 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.531571 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.531832 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.532070 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.544917 1483946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1225 13:27:07.545482 1483946 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:07.546037 1483946 main.go:141] libmachine: Using API Version  1
	I1225 13:27:07.546059 1483946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:07.546492 1483946 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:07.546850 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetState
	I1225 13:27:07.548902 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .DriverName
	I1225 13:27:07.549177 1483946 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.549196 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:07.549218 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHHostname
	I1225 13:27:07.553036 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553541 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ab:67", ip: ""} in network mk-embed-certs-880612: {Iface:virbr2 ExpiryTime:2023-12-25 14:26:26 +0000 UTC Type:0 Mac:52:54:00:a2:ab:67 Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:embed-certs-880612 Clientid:01:52:54:00:a2:ab:67}
	I1225 13:27:07.553572 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | domain embed-certs-880612 has defined IP address 192.168.50.179 and MAC address 52:54:00:a2:ab:67 in network mk-embed-certs-880612
	I1225 13:27:07.553784 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHPort
	I1225 13:27:07.554642 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHKeyPath
	I1225 13:27:07.554893 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .GetSSHUsername
	I1225 13:27:07.555581 1483946 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/embed-certs-880612/id_rsa Username:docker}
	I1225 13:27:07.676244 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:07.704310 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:07.718012 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:07.718043 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:07.779041 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:07.779073 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:07.786154 1483946 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:07.812338 1483946 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.812373 1483946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:07.837795 1483946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:07.974099 1483946 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-880612" context rescaled to 1 replicas
	I1225 13:27:07.974158 1483946 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:07.977116 1483946 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:07.978618 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:09.163988 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.459630406s)
	I1225 13:27:09.164059 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164073 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164091 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.487803106s)
	I1225 13:27:09.164129 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164149 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164617 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164624 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.164629 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.164639 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164641 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.164651 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164653 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.164661 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164666 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.164622 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165025 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165056 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.165095 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165121 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.165172 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.165186 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.188483 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.188510 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.188847 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.188898 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.188906 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.193684 1483946 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.215023208s)
	I1225 13:27:09.193736 1483946 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:09.193789 1483946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.355953438s)
	I1225 13:27:09.193825 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.193842 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.194176 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.194192 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.194208 1483946 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:09.194219 1483946 main.go:141] libmachine: (embed-certs-880612) Calling .Close
	I1225 13:27:09.195998 1483946 main.go:141] libmachine: (embed-certs-880612) DBG | Closing plugin on server side
	I1225 13:27:09.196000 1483946 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:09.196033 1483946 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:09.196044 1483946 addons.go:473] Verifying addon metrics-server=true in "embed-certs-880612"
	I1225 13:27:09.198211 1483946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:04.943819 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:04.943958 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:04.960056 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.443699 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.443795 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.461083 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:05.943713 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:05.943821 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:05.960712 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.444221 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.444305 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.458894 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.944546 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:06.944630 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:06.958754 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.444332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.444462 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.491468 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:07.943982 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:07.944135 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:07.960697 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.444285 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.444408 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.461209 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:08.943720 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:08.943866 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:08.959990 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:09.444604 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.444727 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.463020 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:06.556605 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:08.560748 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:07.728505 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:07.728994 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:07.729023 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:07.728936 1484760 retry.go:31] will retry after 2.39810797s: waiting for machine to come up
	I1225 13:27:10.129402 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:10.129925 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:10.129960 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:10.129860 1484760 retry.go:31] will retry after 4.278491095s: waiting for machine to come up
	I1225 13:27:09.199531 1483946 addons.go:508] enable addons completed in 1.746293071s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:11.199503 1483946 node_ready.go:49] node "embed-certs-880612" has status "Ready":"True"
	I1225 13:27:11.199529 1483946 node_ready.go:38] duration metric: took 2.005779632s waiting for node "embed-certs-880612" to be "Ready" ...
	I1225 13:27:11.199541 1483946 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:11.207447 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:09.943841 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:09.943948 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:09.960478 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.444037 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.444309 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.463480 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:10.943760 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:10.943886 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:10.960191 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.444602 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.444702 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.458181 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:11.943674 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:11.943783 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:11.956418 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.443719 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.443835 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.456707 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:12.944332 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:12.944434 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:12.957217 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.443965 1484104 api_server.go:166] Checking apiserver status ...
	I1225 13:27:13.444076 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:13.455968 1484104 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:13.456008 1484104 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1225 13:27:13.456051 1484104 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:13.456067 1484104 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:13.456145 1484104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:13.497063 1484104 cri.go:89] found id: ""
	I1225 13:27:13.497135 1484104 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:13.513279 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:13.522816 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:13.522885 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532580 1484104 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:13.532612 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:13.668876 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:14.848056 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179140695s)
	I1225 13:27:14.848090 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:11.072420 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:13.555685 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:14.413456 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:14.414013 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | unable to find current IP address of domain old-k8s-version-198979 in network mk-old-k8s-version-198979
	I1225 13:27:14.414043 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | I1225 13:27:14.413960 1484760 retry.go:31] will retry after 4.470102249s: waiting for machine to come up
	I1225 13:27:11.714710 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.714747 1483946 pod_ready.go:81] duration metric: took 507.263948ms waiting for pod "coredns-5dd5756b68-4jqz4" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.714760 1483946 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720448 1483946 pod_ready.go:92] pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.720472 1483946 pod_ready.go:81] duration metric: took 5.705367ms waiting for pod "coredns-5dd5756b68-sbn7n" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.720481 1483946 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725691 1483946 pod_ready.go:92] pod "etcd-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:11.725717 1483946 pod_ready.go:81] duration metric: took 5.229718ms waiting for pod "etcd-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:11.725725 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238949 1483946 pod_ready.go:92] pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.238979 1483946 pod_ready.go:81] duration metric: took 1.513246575s waiting for pod "kube-apiserver-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.238992 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244957 1483946 pod_ready.go:92] pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.244980 1483946 pod_ready.go:81] duration metric: took 5.981457ms waiting for pod "kube-controller-manager-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.244991 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609255 1483946 pod_ready.go:92] pod "kube-proxy-677d7" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:13.609282 1483946 pod_ready.go:81] duration metric: took 364.285426ms waiting for pod "kube-proxy-677d7" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:13.609292 1483946 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621505 1483946 pod_ready.go:92] pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:15.621540 1483946 pod_ready.go:81] duration metric: took 2.012239726s waiting for pod "kube-scheduler-embed-certs-880612" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.621553 1483946 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:15.047153 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.142405 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:15.237295 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:15.237406 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:15.737788 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.238003 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:16.738328 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.238494 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:17.738177 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.237676 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:18.259279 1484104 api_server.go:72] duration metric: took 3.021983877s to wait for apiserver process to appear ...
	I1225 13:27:18.259305 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:18.259331 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:15.555810 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.056361 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:18.888547 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889138 1482618 main.go:141] libmachine: (old-k8s-version-198979) Found IP for machine: 192.168.39.186
	I1225 13:27:18.889167 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserving static IP address...
	I1225 13:27:18.889183 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has current primary IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.889631 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.889672 1482618 main.go:141] libmachine: (old-k8s-version-198979) Reserved static IP address: 192.168.39.186
	I1225 13:27:18.889702 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | skip adding static IP to network mk-old-k8s-version-198979 - found existing host DHCP lease matching {name: "old-k8s-version-198979", mac: "52:54:00:a1:03:69", ip: "192.168.39.186"}
	I1225 13:27:18.889724 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Getting to WaitForSSH function...
	I1225 13:27:18.889741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Waiting for SSH to be available...
	I1225 13:27:18.892133 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892475 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.892509 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.892626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH client type: external
	I1225 13:27:18.892658 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa (-rw-------)
	I1225 13:27:18.892688 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:27:18.892703 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | About to run SSH command:
	I1225 13:27:18.892722 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | exit 0
	I1225 13:27:18.991797 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | SSH cmd err, output: <nil>: 
	I1225 13:27:18.992203 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetConfigRaw
	I1225 13:27:18.992943 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:18.996016 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996344 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:18.996416 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:18.996762 1482618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/config.json ...
	I1225 13:27:18.996990 1482618 machine.go:88] provisioning docker machine ...
	I1225 13:27:18.997007 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:18.997254 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997454 1482618 buildroot.go:166] provisioning hostname "old-k8s-version-198979"
	I1225 13:27:18.997483 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:18.997670 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.000725 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001114 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.001144 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.001332 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.001504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001686 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.001836 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.002039 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.002592 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.002614 1482618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-198979 && echo "old-k8s-version-198979" | sudo tee /etc/hostname
	I1225 13:27:19.148260 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-198979
	
	I1225 13:27:19.148291 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.151692 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152160 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.152196 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.152350 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.152566 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152743 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.152941 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.153133 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.153647 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.153678 1482618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-198979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-198979/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-198979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:27:19.294565 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:27:19.294606 1482618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:27:19.294635 1482618 buildroot.go:174] setting up certificates
	I1225 13:27:19.294649 1482618 provision.go:83] configureAuth start
	I1225 13:27:19.294663 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetMachineName
	I1225 13:27:19.295039 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:19.298511 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.298933 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.298971 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.299137 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.302045 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302486 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.302520 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.302682 1482618 provision.go:138] copyHostCerts
	I1225 13:27:19.302777 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:27:19.302806 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:27:19.302869 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:27:19.302994 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:27:19.303012 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:27:19.303042 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:27:19.303103 1482618 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:27:19.303113 1482618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:27:19.303131 1482618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:27:19.303177 1482618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-198979 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube old-k8s-version-198979]
	I1225 13:27:19.444049 1482618 provision.go:172] copyRemoteCerts
	I1225 13:27:19.444142 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:27:19.444180 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.447754 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448141 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.448174 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.448358 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.448593 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.448818 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.448994 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:19.545298 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:27:19.576678 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:27:19.604520 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1225 13:27:19.631640 1482618 provision.go:86] duration metric: configureAuth took 336.975454ms
	I1225 13:27:19.631674 1482618 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:27:19.631899 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:19.632012 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.635618 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636130 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.636166 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.636644 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.636903 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637088 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.637315 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.637511 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:19.638005 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:19.638040 1482618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:27:19.990807 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:27:19.990844 1482618 machine.go:91] provisioned docker machine in 993.840927ms
	I1225 13:27:19.990857 1482618 start.go:300] post-start starting for "old-k8s-version-198979" (driver="kvm2")
	I1225 13:27:19.990870 1482618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:27:19.990908 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:19.991349 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:27:19.991388 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:19.994622 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.994980 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:19.995015 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:19.995147 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:19.995402 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:19.995574 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:19.995713 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.089652 1482618 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:27:20.094575 1482618 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:27:20.094611 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:27:20.094716 1482618 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:27:20.094856 1482618 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:27:20.095010 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:27:20.105582 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:20.133802 1482618 start.go:303] post-start completed in 142.928836ms
	I1225 13:27:20.133830 1482618 fix.go:56] fixHost completed within 25.200724583s
	I1225 13:27:20.133860 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.137215 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137635 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.137670 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.137839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.138081 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138322 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.138518 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.138732 1482618 main.go:141] libmachine: Using SSH client type: native
	I1225 13:27:20.139194 1482618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1225 13:27:20.139228 1482618 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1225 13:27:20.268572 1482618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703510840.203941272
	
	I1225 13:27:20.268602 1482618 fix.go:206] guest clock: 1703510840.203941272
	I1225 13:27:20.268613 1482618 fix.go:219] Guest: 2023-12-25 13:27:20.203941272 +0000 UTC Remote: 2023-12-25 13:27:20.133835417 +0000 UTC m=+384.781536006 (delta=70.105855ms)
	I1225 13:27:20.268641 1482618 fix.go:190] guest clock delta is within tolerance: 70.105855ms
	I1225 13:27:20.268651 1482618 start.go:83] releasing machines lock for "old-k8s-version-198979", held for 25.335582747s
	I1225 13:27:20.268683 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.268981 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:20.272181 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272626 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.272666 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.272948 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273612 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273851 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:20.273925 1482618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:27:20.273990 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.274108 1482618 ssh_runner.go:195] Run: cat /version.json
	I1225 13:27:20.274133 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:20.277090 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277381 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.277608 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.277839 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278041 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278066 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:20.278085 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:20.278284 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:20.278293 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278500 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.278516 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:20.278691 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:20.278852 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:20.395858 1482618 ssh_runner.go:195] Run: systemctl --version
	I1225 13:27:20.403417 1482618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:27:17.629846 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:19.635250 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:20.559485 1482618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:27:20.566356 1482618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:27:20.566487 1482618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:27:20.584531 1482618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:27:20.584565 1482618 start.go:475] detecting cgroup driver to use...
	I1225 13:27:20.584648 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:27:20.599889 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:27:20.613197 1482618 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:27:20.613278 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:27:20.626972 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:27:20.640990 1482618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:27:20.752941 1482618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:27:20.886880 1482618 docker.go:219] disabling docker service ...
	I1225 13:27:20.886971 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:27:20.903143 1482618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:27:20.919083 1482618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:27:21.042116 1482618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:27:21.171997 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:27:21.185237 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:27:21.204711 1482618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1225 13:27:21.204787 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.215196 1482618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:27:21.215276 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.226411 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.239885 1482618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:27:21.250576 1482618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:27:21.263723 1482618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:27:21.274356 1482618 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:27:21.274462 1482618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:27:21.288126 1482618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 13:27:21.300772 1482618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:27:21.467651 1482618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:27:21.700509 1482618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:27:21.700618 1482618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:27:21.708118 1482618 start.go:543] Will wait 60s for crictl version
	I1225 13:27:21.708207 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:21.712687 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:27:21.768465 1482618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:27:21.768563 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.836834 1482618 ssh_runner.go:195] Run: crio --version
	I1225 13:27:21.907627 1482618 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1225 13:27:21.288635 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.288669 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.288685 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.374966 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:21.375010 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:21.760268 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:21.771864 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:21.771898 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.259417 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.271720 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.271779 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:22.760217 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:22.767295 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 13:27:22.767333 1484104 api_server.go:103] status: https://192.168.61.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 13:27:23.259377 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:27:23.265348 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:27:23.275974 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:27:23.276010 1484104 api_server.go:131] duration metric: took 5.01669783s to wait for apiserver health ...
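The healthz polling above shows the usual restart progression: 403 while anonymous access is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, then 200 once the apiserver is fully up, after which minikube records the ~5s wait. A minimal sketch of that loop, assuming a plain HTTP client with certificate verification disabled (minikube's real check authenticates against the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200, treating 403 (anonymous access denied) and 500 (post-start
// hooks still failing) as "not ready yet", as seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip certificate verification for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.39:8444/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}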
	I1225 13:27:23.276024 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:27:23.276033 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:23.278354 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:23.279804 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:23.300762 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
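The 457-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration minikube recommends for the kvm2 driver with the crio runtime. Its exact contents are not included in the log, so the snippet below writes a generic bridge conflist of the same shape; the subnet and plugin list are illustrative assumptions, not the file minikube generated here.

package main

import "os"

// A generic bridge CNI configuration in the shape minikube writes to
// /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file is not shown in
// the log, so field values here are illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}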
	I1225 13:27:23.326548 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:23.346826 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:27:23.346871 1484104 system_pods.go:61] "coredns-5dd5756b68-l7qnn" [860c88a5-5bb9-4556-814a-08f1cc882c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:27:23.346884 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [eca3b322-fbba-4d8e-b8be-10b7f552bd32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 13:27:23.346896 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [730b8b80-bf80-4769-b4cd-7e81b0600599] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 13:27:23.346908 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [8424df4f-e2d8-4f22-8593-21cf0ccc82eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 13:27:23.346965 1484104 system_pods.go:61] "kube-proxy-wnjn2" [ed9e8d7e-d237-46ab-84d1-a78f7f931aab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:27:23.346988 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [f865e5a4-4b21-4d15-a437-47965f0d1db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 13:27:23.347009 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-zgrj5" [d52789c5-dfe7-48e6-9dfd-a7dc5b5be6ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:27:23.347099 1484104 system_pods.go:61] "storage-provisioner" [96723fff-956b-42c4-864b-b18afb0c0285] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:27:23.347116 1484104 system_pods.go:74] duration metric: took 20.540773ms to wait for pod list to return data ...
	I1225 13:27:23.347135 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:23.358619 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:23.358673 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:23.358690 1484104 node_conditions.go:105] duration metric: took 11.539548ms to run NodePressure ...
	I1225 13:27:23.358716 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:23.795558 1484104 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804103 1484104 kubeadm.go:787] kubelet initialised
	I1225 13:27:23.804125 1484104 kubeadm.go:788] duration metric: took 8.535185ms waiting for restarted kubelet to initialise ...
	I1225 13:27:23.804133 1484104 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:23.814199 1484104 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:20.557056 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:22.569215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.054111 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:21.909021 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetIP
	I1225 13:27:21.912423 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.912802 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:21.912828 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:21.913199 1482618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:27:21.917615 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:21.931709 1482618 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 13:27:21.931830 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:21.991133 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:21.991246 1482618 ssh_runner.go:195] Run: which lz4
	I1225 13:27:21.997721 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:27:22.003171 1482618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:27:22.003218 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1225 13:27:23.975639 1482618 crio.go:444] Took 1.977982 seconds to copy over tarball
	I1225 13:27:23.975723 1482618 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:27:21.643721 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:24.132742 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:25.827617 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:28.322507 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.055526 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.558580 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:27.243294 1482618 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.267535049s)
	I1225 13:27:27.243339 1482618 crio.go:451] Took 3.267670 seconds to extract the tarball
	I1225 13:27:27.243368 1482618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1225 13:27:27.285528 1482618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:27:27.338914 1482618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1225 13:27:27.338948 1482618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1225 13:27:27.339078 1482618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.339115 1482618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.339118 1482618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.339160 1482618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.339114 1482618 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.339054 1482618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.339059 1482618 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1225 13:27:27.339060 1482618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340631 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.340647 1482618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.340658 1482618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.340632 1482618 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.340666 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.340630 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.340635 1482618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.502560 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.502567 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.510502 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1225 13:27:27.513052 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.518668 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.522882 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.553027 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.608178 1482618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1225 13:27:27.608235 1482618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.608294 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.655271 1482618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:27.671173 1482618 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1225 13:27:27.671223 1482618 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1225 13:27:27.671283 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.671290 1482618 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1225 13:27:27.671330 1482618 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.671378 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728043 1482618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1225 13:27:27.728102 1482618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.728139 1482618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1225 13:27:27.728159 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.728187 1482618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.728222 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739034 1482618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1225 13:27:27.739077 1482618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:27.739133 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.739156 1482618 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1225 13:27:27.739205 1482618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.739213 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1225 13:27:27.739261 1482618 ssh_runner.go:195] Run: which crictl
	I1225 13:27:27.858062 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1225 13:27:27.858089 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1225 13:27:27.858143 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1225 13:27:27.858175 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1225 13:27:27.858237 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1225 13:27:27.858301 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1225 13:27:27.858358 1482618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1225 13:27:28.004051 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1225 13:27:28.004125 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1225 13:27:28.004183 1482618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.004226 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1225 13:27:28.004304 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1225 13:27:28.004369 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1225 13:27:28.005012 1482618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1225 13:27:28.009472 1482618 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1225 13:27:28.009491 1482618 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1225 13:27:28.009550 1482618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1225 13:27:29.560553 1482618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550970125s)
	I1225 13:27:29.560586 1482618 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1225 13:27:29.560668 1482618 cache_images.go:92] LoadImages completed in 2.22170407s
	W1225 13:27:29.560766 1482618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
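The LoadImages sequence above first asks crictl which images the runtime already has, marks the missing ones as needing transfer, removes stale tags, and then loads cached tarballs from the host with podman load (only pause_3.1 is present in this run, hence the warning about the missing kube-scheduler_v1.16.0 cache file). A simplified sketch of that check-then-load step, with an illustrative image name and tarball path:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadCachedImage is a rough sketch of the LoadImages flow in the log above:
// if the runtime does not already report the image, load a cached tarball
// with podman. Paths and the image name are illustrative only.
func loadCachedImage(image, tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}
	if strings.Contains(string(out), image) {
		return nil // already present, nothing to transfer
	}
	// In minikube the tarball is first scp'd from the host cache to
	// /var/lib/minikube/images/...; here we assume it is already in place.
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		panic(err)
	}
}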
	I1225 13:27:29.560846 1482618 ssh_runner.go:195] Run: crio config
	I1225 13:27:29.639267 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:29.639298 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:29.639324 1482618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1225 13:27:29.639375 1482618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-198979 NodeName:old-k8s-version-198979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1225 13:27:29.639598 1482618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-198979"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-198979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.186:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:27:29.639711 1482618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-198979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:27:29.639800 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1225 13:27:29.649536 1482618 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:27:29.649614 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:27:29.658251 1482618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1225 13:27:29.678532 1482618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 13:27:29.698314 1482618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1225 13:27:29.718873 1482618 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1225 13:27:29.723656 1482618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:27:29.737736 1482618 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979 for IP: 192.168.39.186
	I1225 13:27:29.737787 1482618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:29.738006 1482618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:27:29.738069 1482618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:27:29.738147 1482618 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.key
	I1225 13:27:29.738211 1482618 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key.d0691019
	I1225 13:27:29.738252 1482618 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key
	I1225 13:27:29.738456 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:27:29.738501 1482618 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:27:29.738511 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:27:29.738543 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:27:29.738578 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:27:29.738617 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:27:29.738682 1482618 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:27:29.739444 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:27:29.765303 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:27:29.790702 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:27:29.818835 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 13:27:29.845659 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:27:29.872043 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:27:29.902732 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:27:29.928410 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:27:29.954350 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:27:29.978557 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:27:30.007243 1482618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:27:30.036876 1482618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:27:30.055990 1482618 ssh_runner.go:195] Run: openssl version
	I1225 13:27:30.062813 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:27:30.075937 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082034 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.082145 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:27:30.089645 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:27:30.102657 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:27:30.115701 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120635 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.120711 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:27:30.128051 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:27:30.139465 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:27:30.151046 1482618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156574 1482618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.156656 1482618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:27:30.162736 1482618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:27:30.174356 1482618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:27:30.180962 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 13:27:30.187746 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 13:27:30.194481 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 13:27:30.202279 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 13:27:30.210555 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 13:27:30.218734 1482618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
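The openssl x509 -checkend 86400 calls above verify that each control-plane certificate is still valid for at least 24 hours before the cluster is restarted. The same check can be expressed with Go's standard library; checkend here is a hypothetical helper, and the certificate path is taken from the log purely for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate in pemPath expires within d --
// the same question `openssl x509 -noout -checkend 86400` answers in the
// log above (86400 seconds = 24h).
func checkend(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}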
	I1225 13:27:30.225325 1482618 kubeadm.go:404] StartCluster: {Name:old-k8s-version-198979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-198979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:27:30.225424 1482618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:27:30.225478 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:30.274739 1482618 cri.go:89] found id: ""
	I1225 13:27:30.274842 1482618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:27:30.285949 1482618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1225 13:27:30.285980 1482618 kubeadm.go:636] restartCluster start
	I1225 13:27:30.286051 1482618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 13:27:30.295521 1482618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:30.296804 1482618 kubeconfig.go:92] found "old-k8s-version-198979" server: "https://192.168.39.186:8443"
	I1225 13:27:30.299493 1482618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 13:27:30.308641 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.308745 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.320654 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:26.631365 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:29.129943 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.131590 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.329682 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:31.824743 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.824770 1484104 pod_ready.go:81] duration metric: took 8.010540801s waiting for pod "coredns-5dd5756b68-l7qnn" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.824781 1484104 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830321 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:31.830347 1484104 pod_ready.go:81] duration metric: took 5.559816ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:31.830358 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338865 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:32.338898 1484104 pod_ready.go:81] duration metric: took 508.532498ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:32.338913 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846030 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.846054 1484104 pod_ready.go:81] duration metric: took 1.507133449s waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.846065 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851826 1484104 pod_ready.go:92] pod "kube-proxy-wnjn2" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:33.851846 1484104 pod_ready.go:81] duration metric: took 5.775207ms waiting for pod "kube-proxy-wnjn2" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:33.851855 1484104 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
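The pod_ready waits above poll each system-critical pod until its Ready condition reports True, recording how long each wait took. A minimal client-go sketch of that kind of wait loop, assuming kubeconfig access to the cluster; it is not minikube's pod_ready.go, and the pod name is copied from the log purely as an example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod every 2s until it is Ready or ctx expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q not Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "kube-scheduler-default-k8s-diff-port-344803"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}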
	I1225 13:27:32.054359 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:34.054586 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:30.809359 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:30.809482 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:30.821194 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.308690 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.308830 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.322775 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:31.809511 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:31.809612 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:31.823928 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.309450 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.309569 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.320937 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:32.809587 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:32.809686 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:32.822957 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.308905 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.308992 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.321195 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.808702 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:33.808803 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:33.820073 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.309661 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.309760 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.322931 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:34.809599 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:34.809724 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:34.825650 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:35.308697 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.308798 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.321313 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:33.630973 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.128884 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.859839 1484104 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.359809 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:27:36.359838 1484104 pod_ready.go:81] duration metric: took 2.507975576s waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:36.359853 1484104 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:38.371707 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:36.554699 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:39.053732 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:35.809083 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:35.809186 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:35.821434 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.309100 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.309181 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.322566 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:36.809026 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:36.809136 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:36.820791 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.309382 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.309501 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.321365 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:37.809397 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:37.809515 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:37.821538 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.309716 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.309819 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.321060 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:38.809627 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:38.809728 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:38.821784 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.309363 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.309483 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.320881 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:39.809420 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:39.809597 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:39.820752 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.308911 1482618 api_server.go:166] Checking apiserver status ...
	I1225 13:27:40.309009 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1225 13:27:40.322568 1482618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1225 13:27:40.322614 1482618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
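	(Annotation) The repeated "Checking apiserver status ..." entries above are a fixed-interval poll for the kube-apiserver PID that gives up once its context deadline expires, which is what produces the "needs reconfigure: apiserver error: context deadline exceeded" decision. A minimal, hypothetical Go sketch of that polling pattern follows; waitForAPIServerPID and the 500ms/15s values are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerPID polls pgrep until the apiserver process appears or the
	// context deadline expires, mirroring the api_server.go:166/170 loop above.
	func waitForAPIServerPID(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Surfaces in the log as "apiserver error: context deadline exceeded".
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
				// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
				out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
				if err == nil && len(out) > 0 {
					return nil // PID found, apiserver is running
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		if err := waitForAPIServerPID(ctx); err != nil {
			fmt.Println("needs reconfigure:", err)
		}
	}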
	I1225 13:27:40.322653 1482618 kubeadm.go:1135] stopping kube-system containers ...
	I1225 13:27:40.322670 1482618 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1225 13:27:40.322730 1482618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:27:40.366271 1482618 cri.go:89] found id: ""
	I1225 13:27:40.366365 1482618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 13:27:40.383123 1482618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:27:40.392329 1482618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:27:40.392412 1482618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401435 1482618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1225 13:27:40.401471 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:38.131920 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.629516 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.868311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:42.872952 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:41.054026 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:43.054332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:40.538996 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.466467 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.697265 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:41.796796 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
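	(Annotation) Rather than running a full "kubeadm init", the restart path above re-runs individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. The following is a hypothetical Go sketch of that sequencing only; the real commands in the log also prepend the versioned binaries directory to PATH via /bin/bash -c, which is omitted here.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same phase order as the log above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			args := []string{"kubeadm", "init", "phase"}
			args = append(args, strings.Fields(phase)...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("sudo", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
				return
			}
		}
		fmt.Println("control plane re-created from /var/tmp/minikube/kubeadm.yaml")
	}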
	I1225 13:27:41.898179 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:27:41.898290 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.398616 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:42.899373 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.399246 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.898788 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:27:43.923617 1482618 api_server.go:72] duration metric: took 2.025431683s to wait for apiserver process to appear ...
	I1225 13:27:43.923650 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:27:43.923684 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:42.632296 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.128501 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.368613 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.868011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:45.054778 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:47.559938 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:48.924695 1482618 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 13:27:48.924755 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.954284 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.954379 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:49.954401 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:49.985515 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1225 13:27:49.985568 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1225 13:27:50.424616 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.431560 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.431604 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:50.924173 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:50.935578 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1225 13:27:50.935622 1482618 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1225 13:27:51.424341 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:27:51.431709 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:27:51.440822 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:27:51.440855 1482618 api_server.go:131] duration metric: took 7.517198191s to wait for apiserver health ...
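	(Annotation) The healthz progression above is the expected restart sequence: anonymous requests first get 403 until the system:public-info-viewer bootstrap role exists, then /healthz returns 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and finally 200 "ok". A hypothetical sketch of a single anonymous probe and a retry loop; the URL is taken from the log, everything else (timeouts, retry interval, skipping TLS verification) is an assumption for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz issues one anonymous GET against /healthz, the way the
	// api_server.go:253 checks above do.
	func probeHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip verification of the apiserver's self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		for {
			code, body, err := probeHealthz("https://192.168.39.186:8443/healthz")
			if err == nil && code == http.StatusOK {
				fmt.Println("healthz:", body) // "ok"
				return
			}
			fmt.Printf("healthz not ready yet (status=%d, err=%v)\n", code, err)
			time.Sleep(500 * time.Millisecond)
		}
	}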
	I1225 13:27:51.440866 1482618 cni.go:84] Creating CNI manager for ""
	I1225 13:27:51.440873 1482618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:27:51.442446 1482618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:27:47.130936 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:49.132275 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:51.443830 1482618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:27:51.456628 1482618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
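	(Annotation) The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration the "Configuring bridge CNI" step refers to. Its exact contents are not reproduced in the log; the hypothetical sketch below writes a typical bridge + portmap conflist to the same path, with the subnet, plugin options, and file modes all being assumptions rather than minikube's generated file.

	package main

	import (
		"fmt"
		"os"
	)

	// An illustrative bridge CNI chain (bridge + portmap); not the exact file
	// minikube generated, whose contents the log does not show.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // sudo mkdir -p /etc/cni/net.d
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote bridge CNI config")
	}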
	I1225 13:27:51.477822 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:27:51.487046 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:27:51.487082 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:27:51.487087 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:27:51.487091 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:27:51.487096 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Pending
	I1225 13:27:51.487100 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:27:51.487103 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:27:51.487107 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:27:51.487113 1482618 system_pods.go:74] duration metric: took 9.266811ms to wait for pod list to return data ...
	I1225 13:27:51.487120 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:27:51.491782 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:27:51.491817 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:27:51.491831 1482618 node_conditions.go:105] duration metric: took 4.70597ms to run NodePressure ...
	I1225 13:27:51.491855 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 13:27:51.768658 1482618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776258 1482618 kubeadm.go:787] kubelet initialised
	I1225 13:27:51.776283 1482618 kubeadm.go:788] duration metric: took 7.588357ms waiting for restarted kubelet to initialise ...
	I1225 13:27:51.776293 1482618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:27:51.784053 1482618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.791273 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791314 1482618 pod_ready.go:81] duration metric: took 7.223677ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.791328 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.791338 1482618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.801453 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801491 1482618 pod_ready.go:81] duration metric: took 10.138221ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.801505 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "etcd-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.801514 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.809536 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809577 1482618 pod_ready.go:81] duration metric: took 8.051285ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.809590 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.809608 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:51.882231 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882268 1482618 pod_ready.go:81] duration metric: took 72.643349ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:51.882299 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:51.882309 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.282486 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282531 1482618 pod_ready.go:81] duration metric: took 400.208562ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.282543 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-proxy-vw9lf" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.282552 1482618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:52.689279 1482618 pod_ready.go:97] node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689329 1482618 pod_ready.go:81] duration metric: took 406.764819ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	E1225 13:27:52.689343 1482618 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-198979" hosting pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:52.689353 1482618 pod_ready.go:38] duration metric: took 913.049281ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
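	(Annotation) The system-critical wait just above, like the interleaved metrics-server checks from the other profiles, comes down to inspecting each pod's Ready condition; while the hosting node still reports "Ready":"False", the pod wait is skipped with the pod_ready.go:97 messages. A hypothetical helper using the standard k8s.io/api types, not minikube's own implementation:

	package podready

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True; this is the
	// check that flips the log lines from "Ready":"False" to "Ready":"True".
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}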
	I1225 13:27:52.689387 1482618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:27:52.705601 1482618 ops.go:34] apiserver oom_adj: -16
	I1225 13:27:52.705628 1482618 kubeadm.go:640] restartCluster took 22.419638621s
	I1225 13:27:52.705639 1482618 kubeadm.go:406] StartCluster complete in 22.480335985s
	I1225 13:27:52.705663 1482618 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.705760 1482618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:27:52.708825 1482618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:27:52.709185 1482618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:27:52.709313 1482618 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:27:52.709404 1482618 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709427 1482618 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-198979"
	W1225 13:27:52.709435 1482618 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:27:52.709443 1482618 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709460 1482618 config.go:182] Loaded profile config "old-k8s-version-198979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1225 13:27:52.709466 1482618 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-198979"
	I1225 13:27:52.709475 1482618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-198979"
	I1225 13:27:52.709482 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709488 1482618 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-198979"
	W1225 13:27:52.709502 1482618 addons.go:246] addon metrics-server should already be in state true
	I1225 13:27:52.709553 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.709914 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709953 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709964 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.709992 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.709965 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.710046 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.729360 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1225 13:27:52.730016 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.730343 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1225 13:27:52.730527 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1225 13:27:52.730777 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.730808 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.730852 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731329 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.731365 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.731381 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.731589 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.731638 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.731715 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.732311 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.732360 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.732731 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.732763 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.733225 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.733787 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.733859 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.735675 1482618 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-198979"
	W1225 13:27:52.735694 1482618 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:27:52.735725 1482618 host.go:66] Checking if "old-k8s-version-198979" exists ...
	I1225 13:27:52.736079 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.736117 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.751072 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I1225 13:27:52.752097 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.753002 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.753022 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.753502 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.753741 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.756158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.758410 1482618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:27:52.758080 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I1225 13:27:52.759927 1482618 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.759942 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:27:52.759963 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.760521 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.761648 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.761665 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.762046 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.762823 1482618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:27:52.762872 1482618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:27:52.763974 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764712 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.764748 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.764752 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.765009 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1225 13:27:52.765216 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.765461 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.791493 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.792265 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.792294 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.792795 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.793023 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.795238 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.799536 1482618 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:27:52.800892 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:27:52.800920 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:27:52.800955 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.804762 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806571 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.806568 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.806606 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.806957 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.807115 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.807260 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.811419 1482618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1225 13:27:52.811816 1482618 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:27:52.812352 1482618 main.go:141] libmachine: Using API Version  1
	I1225 13:27:52.812379 1482618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:27:52.812872 1482618 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:27:52.813083 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetState
	I1225 13:27:52.814823 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .DriverName
	I1225 13:27:52.815122 1482618 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:52.815138 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:27:52.815158 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHHostname
	I1225 13:27:52.818411 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.818892 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:03:69", ip: ""} in network mk-old-k8s-version-198979: {Iface:virbr4 ExpiryTime:2023-12-25 14:27:09 +0000 UTC Type:0 Mac:52:54:00:a1:03:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:old-k8s-version-198979 Clientid:01:52:54:00:a1:03:69}
	I1225 13:27:52.818926 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | domain old-k8s-version-198979 has defined IP address 192.168.39.186 and MAC address 52:54:00:a1:03:69 in network mk-old-k8s-version-198979
	I1225 13:27:52.819253 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHPort
	I1225 13:27:52.819504 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHKeyPath
	I1225 13:27:52.819705 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .GetSSHUsername
	I1225 13:27:52.819981 1482618 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/old-k8s-version-198979/id_rsa Username:docker}
	I1225 13:27:52.963144 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:27:52.974697 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:27:52.974733 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:27:53.021391 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:27:53.039959 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:27:53.039991 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:27:53.121390 1482618 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.121421 1482618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:27:53.196232 1482618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:27:53.256419 1482618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-198979" context rescaled to 1 replicas
	I1225 13:27:53.256479 1482618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:27:53.258366 1482618 out.go:177] * Verifying Kubernetes components...
	I1225 13:27:53.259807 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:27:53.276151 1482618 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1225 13:27:53.687341 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687374 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.687666 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.687690 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.687701 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.687710 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.689261 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.689286 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.689294 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.725954 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.725985 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.726715 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.726737 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.726743 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.726776 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.726787 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.727040 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.727054 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.727061 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.744318 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.744356 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.744696 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.744745 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.846817 1482618 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:27:53.846878 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.846899 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847234 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847301 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847317 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847329 1482618 main.go:141] libmachine: Making call to close driver server
	I1225 13:27:53.847351 1482618 main.go:141] libmachine: (old-k8s-version-198979) Calling .Close
	I1225 13:27:53.847728 1482618 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:27:53.847767 1482618 main.go:141] libmachine: (old-k8s-version-198979) DBG | Closing plugin on server side
	I1225 13:27:53.847793 1482618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:27:53.847810 1482618 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-198979"
	I1225 13:27:53.850107 1482618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:27:49.870506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.369916 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:50.056130 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:52.562555 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:53.851456 1482618 addons.go:508] enable addons completed in 1.14214354s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:27:51.635205 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.131852 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:54.868902 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.367267 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.368997 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.057522 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:57.555214 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:55.851206 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:27:58.350906 1482618 node_ready.go:58] node "old-k8s-version-198979" has status "Ready":"False"
	I1225 13:28:00.350892 1482618 node_ready.go:49] node "old-k8s-version-198979" has status "Ready":"True"
	I1225 13:28:00.350918 1482618 node_ready.go:38] duration metric: took 6.504066205s waiting for node "old-k8s-version-198979" to be "Ready" ...
	I1225 13:28:00.350928 1482618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:00.355882 1482618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362249 1482618 pod_ready.go:92] pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.362281 1482618 pod_ready.go:81] duration metric: took 6.362168ms waiting for pod "coredns-5644d7b6d9-mk9jx" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.362290 1482618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367738 1482618 pod_ready.go:92] pod "etcd-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.367777 1482618 pod_ready.go:81] duration metric: took 5.478984ms waiting for pod "etcd-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.367790 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373724 1482618 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.373754 1482618 pod_ready.go:81] duration metric: took 5.95479ms waiting for pod "kube-apiserver-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.373774 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380810 1482618 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.380841 1482618 pod_ready.go:81] duration metric: took 7.058206ms waiting for pod "kube-controller-manager-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.380854 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:27:56.635216 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:27:59.129464 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:01.132131 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.750612 1482618 pod_ready.go:92] pod "kube-proxy-vw9lf" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:00.750641 1482618 pod_ready.go:81] duration metric: took 369.779347ms waiting for pod "kube-proxy-vw9lf" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:00.750651 1482618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151567 1482618 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace has status "Ready":"True"
	I1225 13:28:01.151596 1482618 pod_ready.go:81] duration metric: took 400.937167ms waiting for pod "kube-scheduler-old-k8s-version-198979" in "kube-system" namespace to be "Ready" ...
	I1225 13:28:01.151617 1482618 pod_ready.go:38] duration metric: took 800.677743ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:28:01.151634 1482618 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:28:01.151694 1482618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:28:01.170319 1482618 api_server.go:72] duration metric: took 7.913795186s to wait for apiserver process to appear ...
	I1225 13:28:01.170349 1482618 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:28:01.170368 1482618 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1225 13:28:01.177133 1482618 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1225 13:28:01.178326 1482618 api_server.go:141] control plane version: v1.16.0
	I1225 13:28:01.178351 1482618 api_server.go:131] duration metric: took 7.994163ms to wait for apiserver health ...
	I1225 13:28:01.178361 1482618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:28:01.352663 1482618 system_pods.go:59] 7 kube-system pods found
	I1225 13:28:01.352693 1482618 system_pods.go:61] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.352697 1482618 system_pods.go:61] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.352702 1482618 system_pods.go:61] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.352706 1482618 system_pods.go:61] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.352710 1482618 system_pods.go:61] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.352714 1482618 system_pods.go:61] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.352718 1482618 system_pods.go:61] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.352724 1482618 system_pods.go:74] duration metric: took 174.35745ms to wait for pod list to return data ...
	I1225 13:28:01.352731 1482618 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:28:01.554095 1482618 default_sa.go:45] found service account: "default"
	I1225 13:28:01.554129 1482618 default_sa.go:55] duration metric: took 201.391529ms for default service account to be created ...
	I1225 13:28:01.554139 1482618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:28:01.757666 1482618 system_pods.go:86] 7 kube-system pods found
	I1225 13:28:01.757712 1482618 system_pods.go:89] "coredns-5644d7b6d9-mk9jx" [7487388f-a7b7-401e-9ce3-06fac16ddd47] Running
	I1225 13:28:01.757724 1482618 system_pods.go:89] "etcd-old-k8s-version-198979" [5d65ba8a-44fa-493c-a4c3-a77746f7dcb4] Running
	I1225 13:28:01.757731 1482618 system_pods.go:89] "kube-apiserver-old-k8s-version-198979" [44311c5c-5f2f-4689-8491-a342d11269f0] Running
	I1225 13:28:01.757747 1482618 system_pods.go:89] "kube-controller-manager-old-k8s-version-198979" [adc5dfe5-8eea-4201-8210-9e7dda6253ef] Running
	I1225 13:28:01.757754 1482618 system_pods.go:89] "kube-proxy-vw9lf" [2b7377f2-3ae6-4003-977d-4eb3c7cd11f0] Running
	I1225 13:28:01.757763 1482618 system_pods.go:89] "kube-scheduler-old-k8s-version-198979" [5600c679-92a4-4520-88bc-291a6912a8ed] Running
	I1225 13:28:01.757769 1482618 system_pods.go:89] "storage-provisioner" [0d6c87f1-93ae-479b-ac0e-4623e326afb6] Running
	I1225 13:28:01.757785 1482618 system_pods.go:126] duration metric: took 203.63938ms to wait for k8s-apps to be running ...
	I1225 13:28:01.757800 1482618 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:28:01.757863 1482618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:28:01.771792 1482618 system_svc.go:56] duration metric: took 13.980705ms WaitForService to wait for kubelet.
	I1225 13:28:01.771821 1482618 kubeadm.go:581] duration metric: took 8.515309843s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:28:01.771843 1482618 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:28:01.952426 1482618 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:28:01.952463 1482618 node_conditions.go:123] node cpu capacity is 2
	I1225 13:28:01.952477 1482618 node_conditions.go:105] duration metric: took 180.629128ms to run NodePressure ...
	I1225 13:28:01.952493 1482618 start.go:228] waiting for startup goroutines ...
	I1225 13:28:01.952500 1482618 start.go:233] waiting for cluster config update ...
	I1225 13:28:01.952512 1482618 start.go:242] writing updated cluster config ...
	I1225 13:28:01.952974 1482618 ssh_runner.go:195] Run: rm -f paused
	I1225 13:28:02.007549 1482618 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I1225 13:28:02.009559 1482618 out.go:177] 
	W1225 13:28:02.011242 1482618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I1225 13:28:02.012738 1482618 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1225 13:28:02.014029 1482618 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-198979" cluster and "default" namespace by default
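	(Annotation) The closing warning compares kubectl's minor version with the cluster's: 1.29 against 1.16 gives a skew of 13 minors, far beyond the single-minor skew kubectl supports, hence the suggestion to use the bundled "minikube kubectl". A hypothetical sketch of that comparison (minorSkew is an illustrative name, not minikube's function):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor versions of
	// two "major.minor.patch" strings, e.g. "1.29.0" vs "1.16.0" -> 13.
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			n, _ := strconv.Atoi(parts[1]) // sketch: ignore parse errors
			return n
		}
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		return skew
	}

	func main() {
		if skew := minorSkew("1.29.0", "1.16.0"); skew > 1 {
			fmt.Printf("kubectl minor skew: %d (may have incompatibilities)\n", skew)
		}
	}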
	I1225 13:28:01.869370 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.368824 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:00.055713 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:02.553981 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:04.554824 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:03.629358 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.130616 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:06.869993 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.367869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:07.054835 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:09.554904 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:08.130786 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:10.632435 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:11.368789 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.867665 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:12.054007 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:14.554676 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:13.129854 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.628997 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:15.869048 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:18.368070 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:16.557633 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:19.054486 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:17.629072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.129902 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:20.868173 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.868637 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:21.555027 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.054858 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:22.133148 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:24.630133 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:25.369437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.870029 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:26.056198 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:28.555876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:27.129583 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:29.629963 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:30.367773 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.368497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.369791 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:31.053212 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:33.054315 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:32.128310 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:34.130650 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.869325 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.367488 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:35.056761 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:37.554917 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:36.632857 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:39.129518 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.368425 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:43.868157 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:40.054854 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:42.555015 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:45.053900 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:41.630558 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:44.132072 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.366422 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:48.368331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:47.056378 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.555186 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:46.629415 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:49.129249 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:51.129692 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:50.868321 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.366805 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:52.053785 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:54.057533 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:53.629427 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.629652 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:55.368197 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.867659 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.868187 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:56.556558 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.055474 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:57.629912 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:28:59.630858 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.868360 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:03.870936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:01.555132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.053887 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:02.127901 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:04.131186 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.367634 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.867571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.054546 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:08.554559 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:06.629995 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:09.129898 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:10.868677 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:12.868979 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.055554 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:13.554637 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:11.629511 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.129806 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:14.872549 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:17.371705 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:19.868438 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.054016 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.055476 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:16.629688 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:18.630125 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:21.132102 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.367525 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:24.369464 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:20.554660 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:22.556044 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:25.054213 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:23.630061 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.132281 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:26.868977 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.367384 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:27.055844 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:29.554124 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:28.630474 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:30.631070 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.367691 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.867941 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:31.555167 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:33.557066 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:32.634599 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:35.131402 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.369081 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.868497 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:36.054764 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:38.054975 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:37.629895 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:39.630456 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:41.366745 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:43.367883 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:40.554998 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.555257 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:42.130638 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:44.629851 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.371692 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.866965 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.868100 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:45.057506 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:47.555247 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:46.632874 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:49.129782 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.130176 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:51.868818 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.868968 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:50.055939 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:52.556609 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.054048 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:53.132556 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:55.632608 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:56.368065 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.868076 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:57.054224 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:59.554940 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:29:58.128545 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.129437 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:00.868364 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:03.368093 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.054215 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.056019 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:02.129706 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:04.130092 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:05.867992 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:07.872121 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.554889 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:09.056197 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:06.630974 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:08.632171 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.128952 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:10.367536 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:12.369331 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:11.554738 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.555681 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:13.129878 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:15.130470 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:14.868630 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.367768 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.368295 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:16.054391 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:18.054606 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:17.630479 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:19.630971 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:21.873194 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.368931 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:20.054866 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.554974 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:25.053696 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:22.130831 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:24.630755 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:26.867555 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:28.868612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.054706 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.055614 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:27.133840 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:29.630572 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:30.868716 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.369710 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:31.554882 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:33.556367 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:32.129865 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:34.129987 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.870671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:38.367237 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:35.557755 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:37.559481 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:36.630513 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:39.130271 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.368072 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.869043 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:40.055427 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:42.554787 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.053876 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:41.629178 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:43.630237 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:45.631199 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:44.873439 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.367548 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.368066 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:47.555106 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:49.556132 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:48.130206 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:50.629041 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:51.369311 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:53.870853 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.055511 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:54.061135 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:52.630215 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.130153 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:55.873755 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:58.367682 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:56.554861 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.054344 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:57.629571 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:30:59.630560 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:00.372506 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:02.867084 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:01.554332 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:03.554717 1483118 pod_ready.go:102] pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.555955 1483118 pod_ready.go:81] duration metric: took 4m0.009196678s waiting for pod "metrics-server-57f55c9bc5-q97kl" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:04.555987 1483118 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:04.555994 1483118 pod_ready.go:38] duration metric: took 4m2.890580557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
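The wait above gave up after the full 4m0s without metrics-server-57f55c9bc5-q97kl ever reporting Ready, which is what raises the context-deadline error. A minimal sketch of inspecting that pod's Ready condition and events by hand, assuming the pod name from the log (the commands are illustrative and not part of the captured run):

    # show recent events and the Ready condition for the stuck pod
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-q97kl
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-q97kl \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'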
	I1225 13:31:04.556014 1483118 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:04.556050 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:04.556152 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:04.615717 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:04.615748 1483118 cri.go:89] found id: ""
	I1225 13:31:04.615759 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:04.615830 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.621669 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:04.621778 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:04.661088 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:04.661127 1483118 cri.go:89] found id: ""
	I1225 13:31:04.661139 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:04.661191 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.666410 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:04.666496 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:04.710927 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:04.710962 1483118 cri.go:89] found id: ""
	I1225 13:31:04.710973 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:04.711041 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.715505 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:04.715587 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:04.761494 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:04.761518 1483118 cri.go:89] found id: ""
	I1225 13:31:04.761527 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:04.761580 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.766925 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:04.767015 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:04.810640 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:04.810670 1483118 cri.go:89] found id: ""
	I1225 13:31:04.810685 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:04.810753 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.815190 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:04.815285 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:04.858275 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:04.858301 1483118 cri.go:89] found id: ""
	I1225 13:31:04.858309 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:04.858362 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.863435 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:04.863529 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:04.914544 1483118 cri.go:89] found id: ""
	I1225 13:31:04.914583 1483118 logs.go:284] 0 containers: []
	W1225 13:31:04.914594 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:04.914603 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:04.914675 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:04.969548 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:04.969577 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:04.969584 1483118 cri.go:89] found id: ""
	I1225 13:31:04.969594 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:04.969660 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.974172 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:04.978956 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:04.978989 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:05.033590 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:05.033632 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:02.133447 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.630226 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:04.869025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:07.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.369061 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:05.085851 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:05.085879 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:05.144002 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:05.144047 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:05.191669 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:05.191703 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:05.238581 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:05.238617 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:05.253236 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:05.253271 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:05.293626 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:05.293674 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:05.338584 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:05.338622 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:05.381135 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:05.381172 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:05.886860 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:05.886918 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:06.045040 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:06.045080 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.101152 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:06.101192 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.662518 1483118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:08.678649 1483118 api_server.go:72] duration metric: took 4m14.820531999s to wait for apiserver process to appear ...
	I1225 13:31:08.678687 1483118 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:08.678729 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:08.678791 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:08.718202 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:08.718246 1483118 cri.go:89] found id: ""
	I1225 13:31:08.718255 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:08.718305 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.723089 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:08.723177 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:08.772619 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:08.772641 1483118 cri.go:89] found id: ""
	I1225 13:31:08.772649 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:08.772709 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.777577 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:08.777669 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:08.818869 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:08.818900 1483118 cri.go:89] found id: ""
	I1225 13:31:08.818910 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:08.818970 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.823301 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:08.823382 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:08.868885 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:08.868913 1483118 cri.go:89] found id: ""
	I1225 13:31:08.868924 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:08.868982 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.873489 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:08.873562 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:08.916925 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:08.916957 1483118 cri.go:89] found id: ""
	I1225 13:31:08.916967 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:08.917065 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.921808 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:08.921901 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:08.961586 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:08.961617 1483118 cri.go:89] found id: ""
	I1225 13:31:08.961628 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:08.961707 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:08.965986 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:08.966096 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:09.012223 1483118 cri.go:89] found id: ""
	I1225 13:31:09.012262 1483118 logs.go:284] 0 containers: []
	W1225 13:31:09.012270 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:09.012278 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:09.012343 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:09.060646 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:09.060675 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:09.060683 1483118 cri.go:89] found id: ""
	I1225 13:31:09.060694 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:09.060767 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.065955 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:09.070859 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:09.070890 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:09.128056 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:09.128096 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:09.179304 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:09.179341 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:09.194019 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:09.194048 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:09.339697 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:09.339743 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:09.389626 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:09.389669 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:09.831437 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:09.831498 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:09.888799 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:09.888848 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:09.932201 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:09.932232 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:09.983201 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:09.983242 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:10.039094 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:10.039149 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:06.630567 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:09.130605 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:11.369445 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.870404 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:10.095628 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:10.095677 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:10.139678 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:10.139717 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:12.688297 1483118 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I1225 13:31:12.693469 1483118 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I1225 13:31:12.694766 1483118 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:31:12.694788 1483118 api_server.go:131] duration metric: took 4.016094906s to wait for apiserver health ...
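A minimal sketch of reproducing the healthz probe shown above by hand, assuming the endpoint 192.168.72.232:8443 from the log; -k skips TLS verification because the cluster CA is not in the host trust store:

    # expect HTTP 200 with body "ok", matching the log lines above
    curl -k https://192.168.72.232:8443/healthz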
	I1225 13:31:12.694796 1483118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:12.694821 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:12.694876 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:12.743143 1483118 cri.go:89] found id: "ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:12.743174 1483118 cri.go:89] found id: ""
	I1225 13:31:12.743185 1483118 logs.go:284] 1 containers: [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f]
	I1225 13:31:12.743238 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.747708 1483118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:12.747803 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:12.800511 1483118 cri.go:89] found id: "6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:12.800540 1483118 cri.go:89] found id: ""
	I1225 13:31:12.800549 1483118 logs.go:284] 1 containers: [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0]
	I1225 13:31:12.800612 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.805236 1483118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:12.805308 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:12.850047 1483118 cri.go:89] found id: "7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:12.850081 1483118 cri.go:89] found id: ""
	I1225 13:31:12.850092 1483118 logs.go:284] 1 containers: [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e]
	I1225 13:31:12.850152 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.854516 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:12.854602 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:12.902131 1483118 cri.go:89] found id: "3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:12.902162 1483118 cri.go:89] found id: ""
	I1225 13:31:12.902173 1483118 logs.go:284] 1 containers: [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83]
	I1225 13:31:12.902239 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.907546 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:12.907634 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:12.966561 1483118 cri.go:89] found id: "b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:12.966590 1483118 cri.go:89] found id: ""
	I1225 13:31:12.966601 1483118 logs.go:284] 1 containers: [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36]
	I1225 13:31:12.966674 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:12.971071 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:12.971161 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:13.026823 1483118 cri.go:89] found id: "ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.026851 1483118 cri.go:89] found id: ""
	I1225 13:31:13.026862 1483118 logs.go:284] 1 containers: [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4]
	I1225 13:31:13.026927 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.031499 1483118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:13.031576 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:13.077486 1483118 cri.go:89] found id: ""
	I1225 13:31:13.077512 1483118 logs.go:284] 0 containers: []
	W1225 13:31:13.077520 1483118 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:13.077526 1483118 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:13.077589 1483118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:13.130262 1483118 cri.go:89] found id: "f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.130287 1483118 cri.go:89] found id: "41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.130294 1483118 cri.go:89] found id: ""
	I1225 13:31:13.130305 1483118 logs.go:284] 2 containers: [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a]
	I1225 13:31:13.130364 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.138345 1483118 ssh_runner.go:195] Run: which crictl
	I1225 13:31:13.142749 1483118 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:13.142780 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:13.264652 1483118 logs.go:123] Gathering logs for kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] ...
	I1225 13:31:13.264694 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f"
	I1225 13:31:13.315138 1483118 logs.go:123] Gathering logs for etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] ...
	I1225 13:31:13.315182 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0"
	I1225 13:31:13.375532 1483118 logs.go:123] Gathering logs for storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] ...
	I1225 13:31:13.375570 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a"
	I1225 13:31:13.418188 1483118 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:13.418226 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:13.433392 1483118 logs.go:123] Gathering logs for kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] ...
	I1225 13:31:13.433423 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83"
	I1225 13:31:13.472447 1483118 logs.go:123] Gathering logs for storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] ...
	I1225 13:31:13.472481 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3"
	I1225 13:31:13.514578 1483118 logs.go:123] Gathering logs for container status ...
	I1225 13:31:13.514631 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:13.568962 1483118 logs.go:123] Gathering logs for coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] ...
	I1225 13:31:13.569001 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e"
	I1225 13:31:13.609819 1483118 logs.go:123] Gathering logs for kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] ...
	I1225 13:31:13.609864 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4"
	I1225 13:31:13.668114 1483118 logs.go:123] Gathering logs for kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] ...
	I1225 13:31:13.668160 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36"
	I1225 13:31:13.710116 1483118 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:13.710155 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:14.068484 1483118 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:14.068548 1483118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:11.629829 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:13.632277 1483946 pod_ready.go:102] pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:15.629964 1483946 pod_ready.go:81] duration metric: took 4m0.008391697s waiting for pod "metrics-server-57f55c9bc5-chnh2" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:15.629997 1483946 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:31:15.630006 1483946 pod_ready.go:38] duration metric: took 4m4.430454443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:15.630022 1483946 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:31:15.630052 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:15.630113 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:15.694629 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:15.694654 1483946 cri.go:89] found id: ""
	I1225 13:31:15.694666 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:15.694735 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.699777 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:15.699847 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:15.744267 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:15.744299 1483946 cri.go:89] found id: ""
	I1225 13:31:15.744308 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:15.744361 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.749213 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:15.749310 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:15.796903 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:15.796930 1483946 cri.go:89] found id: ""
	I1225 13:31:15.796939 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:15.797001 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.801601 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:15.801673 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:15.841792 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:15.841820 1483946 cri.go:89] found id: ""
	I1225 13:31:15.841830 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:15.841902 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.845893 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:15.845970 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:15.901462 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:15.901493 1483946 cri.go:89] found id: ""
	I1225 13:31:15.901505 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:15.901589 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.907173 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:15.907264 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:15.957143 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:15.957177 1483946 cri.go:89] found id: ""
	I1225 13:31:15.957186 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:15.957239 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:15.962715 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:15.962789 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:16.007949 1483946 cri.go:89] found id: ""
	I1225 13:31:16.007988 1483946 logs.go:284] 0 containers: []
	W1225 13:31:16.007999 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:16.008008 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:16.008076 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:16.063958 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:16.063984 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:16.063989 1483946 cri.go:89] found id: ""
	I1225 13:31:16.063997 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:16.064052 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.069193 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:16.074310 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:16.074333 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:16.120318 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:16.120363 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:16.176217 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:16.176264 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:16.633470 1483118 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:16.633507 1483118 system_pods.go:61] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.633512 1483118 system_pods.go:61] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.633516 1483118 system_pods.go:61] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.633521 1483118 system_pods.go:61] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.633525 1483118 system_pods.go:61] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.633529 1483118 system_pods.go:61] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.633536 1483118 system_pods.go:61] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.633541 1483118 system_pods.go:61] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.633548 1483118 system_pods.go:74] duration metric: took 3.938745899s to wait for pod list to return data ...
	I1225 13:31:16.633556 1483118 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:16.637279 1483118 default_sa.go:45] found service account: "default"
	I1225 13:31:16.637314 1483118 default_sa.go:55] duration metric: took 3.749637ms for default service account to be created ...
	I1225 13:31:16.637325 1483118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:16.644466 1483118 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:16.644501 1483118 system_pods.go:89] "coredns-76f75df574-pwk9h" [5856ad8d-6c49-4225-8890-4c912f839ec6] Running
	I1225 13:31:16.644509 1483118 system_pods.go:89] "etcd-no-preload-330063" [9cd731b1-4b30-417c-8679-7080c46f0446] Running
	I1225 13:31:16.644516 1483118 system_pods.go:89] "kube-apiserver-no-preload-330063" [cb3afd61-b997-4aaa-bda5-c3b0a9544474] Running
	I1225 13:31:16.644523 1483118 system_pods.go:89] "kube-controller-manager-no-preload-330063" [dbacd4a1-b826-4ed6-8c05-c94243133f1a] Running
	I1225 13:31:16.644530 1483118 system_pods.go:89] "kube-proxy-jbch6" [af021a36-09e9-4fba-8f23-cef46ed82aa8] Running
	I1225 13:31:16.644536 1483118 system_pods.go:89] "kube-scheduler-no-preload-330063" [84b62a51-b7bb-4d51-a2f9-f675564df134] Running
	I1225 13:31:16.644547 1483118 system_pods.go:89] "metrics-server-57f55c9bc5-q97kl" [4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:16.644558 1483118 system_pods.go:89] "storage-provisioner" [7097decf-3a19-454b-9c87-df6cb2da4de4] Running
	I1225 13:31:16.644583 1483118 system_pods.go:126] duration metric: took 7.250639ms to wait for k8s-apps to be running ...
	I1225 13:31:16.644594 1483118 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:16.644658 1483118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:16.661680 1483118 system_svc.go:56] duration metric: took 17.070893ms WaitForService to wait for kubelet.
	I1225 13:31:16.661723 1483118 kubeadm.go:581] duration metric: took 4m22.80360778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:16.661754 1483118 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:16.666189 1483118 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:16.666227 1483118 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:16.666294 1483118 node_conditions.go:105] duration metric: took 4.531137ms to run NodePressure ...
	I1225 13:31:16.666313 1483118 start.go:228] waiting for startup goroutines ...
	I1225 13:31:16.666323 1483118 start.go:233] waiting for cluster config update ...
	I1225 13:31:16.666338 1483118 start.go:242] writing updated cluster config ...
	I1225 13:31:16.666702 1483118 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:16.729077 1483118 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:31:16.732824 1483118 out.go:177] * Done! kubectl is now configured to use "no-preload-330063" cluster and "default" namespace by default
	I1225 13:31:16.368392 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:18.374788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:16.686611 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:16.686650 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:16.748667 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:16.748705 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:16.937661 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:16.937700 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:16.988870 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:16.988908 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:17.048278 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:17.048316 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:17.095857 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:17.095900 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:17.135425 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:17.135460 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:17.197626 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:17.197670 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:17.213658 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:17.213695 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:17.282101 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:17.282149 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:19.824939 1483946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:31:19.840944 1483946 api_server.go:72] duration metric: took 4m11.866743679s to wait for apiserver process to appear ...
	I1225 13:31:19.840985 1483946 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:31:19.841036 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:19.841114 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:19.895404 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:19.895445 1483946 cri.go:89] found id: ""
	I1225 13:31:19.895455 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:19.895519 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.900604 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:19.900686 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:19.943623 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:19.943652 1483946 cri.go:89] found id: ""
	I1225 13:31:19.943662 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:19.943728 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.948230 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:19.948298 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:19.993271 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:19.993296 1483946 cri.go:89] found id: ""
	I1225 13:31:19.993304 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:19.993355 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:19.997702 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:19.997790 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:20.043487 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.043514 1483946 cri.go:89] found id: ""
	I1225 13:31:20.043525 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:20.043591 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.047665 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:20.047748 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:20.091832 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.091867 1483946 cri.go:89] found id: ""
	I1225 13:31:20.091878 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:20.091947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.096400 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:20.096463 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:20.136753 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.136785 1483946 cri.go:89] found id: ""
	I1225 13:31:20.136794 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:20.136867 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.141479 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:20.141559 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:20.184635 1483946 cri.go:89] found id: ""
	I1225 13:31:20.184677 1483946 logs.go:284] 0 containers: []
	W1225 13:31:20.184688 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:20.184694 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:20.184770 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:20.231891 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.231918 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.231923 1483946 cri.go:89] found id: ""
	I1225 13:31:20.231932 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:20.231991 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.236669 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:20.240776 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:20.240804 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:20.305411 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:20.305479 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:20.376688 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:20.376729 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:20.419016 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:20.419060 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:20.465253 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:20.465288 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:20.505949 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:20.505994 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:20.565939 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:20.565995 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:20.608765 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:20.608798 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:20.646031 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:20.646076 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:20.694772 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:20.694812 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:20.710038 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:20.710074 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:20.841944 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:20.841996 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:21.267824 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:21.267884 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:20.869158 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:22.870463 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:23.834749 1483946 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I1225 13:31:23.840763 1483946 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I1225 13:31:23.842396 1483946 api_server.go:141] control plane version: v1.28.4
	I1225 13:31:23.842424 1483946 api_server.go:131] duration metric: took 4.001431078s to wait for apiserver health ...
	I1225 13:31:23.842451 1483946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:31:23.842481 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:31:23.842535 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:31:23.901377 1483946 cri.go:89] found id: "5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:23.901409 1483946 cri.go:89] found id: ""
	I1225 13:31:23.901420 1483946 logs.go:284] 1 containers: [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df]
	I1225 13:31:23.901489 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.906312 1483946 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:31:23.906382 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:31:23.957073 1483946 cri.go:89] found id: "9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:23.957105 1483946 cri.go:89] found id: ""
	I1225 13:31:23.957115 1483946 logs.go:284] 1 containers: [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e]
	I1225 13:31:23.957175 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:23.961899 1483946 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:31:23.961968 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:31:24.009529 1483946 cri.go:89] found id: "ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:24.009575 1483946 cri.go:89] found id: ""
	I1225 13:31:24.009587 1483946 logs.go:284] 1 containers: [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4]
	I1225 13:31:24.009656 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.014579 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:31:24.014668 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:31:24.059589 1483946 cri.go:89] found id: "868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:24.059618 1483946 cri.go:89] found id: ""
	I1225 13:31:24.059629 1483946 logs.go:284] 1 containers: [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480]
	I1225 13:31:24.059698 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.065185 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:31:24.065265 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:31:24.123904 1483946 cri.go:89] found id: "5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.123932 1483946 cri.go:89] found id: ""
	I1225 13:31:24.123942 1483946 logs.go:284] 1 containers: [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6]
	I1225 13:31:24.124006 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.128753 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:31:24.128849 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:31:24.172259 1483946 cri.go:89] found id: "e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:24.172285 1483946 cri.go:89] found id: ""
	I1225 13:31:24.172296 1483946 logs.go:284] 1 containers: [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0]
	I1225 13:31:24.172363 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.177276 1483946 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:31:24.177356 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:31:24.223415 1483946 cri.go:89] found id: ""
	I1225 13:31:24.223445 1483946 logs.go:284] 0 containers: []
	W1225 13:31:24.223453 1483946 logs.go:286] No container was found matching "kindnet"
	I1225 13:31:24.223459 1483946 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:31:24.223516 1483946 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:31:24.267840 1483946 cri.go:89] found id: "0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:24.267866 1483946 cri.go:89] found id: "03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:24.267870 1483946 cri.go:89] found id: ""
	I1225 13:31:24.267878 1483946 logs.go:284] 2 containers: [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7]
	I1225 13:31:24.267939 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.272947 1483946 ssh_runner.go:195] Run: which crictl
	I1225 13:31:24.279183 1483946 logs.go:123] Gathering logs for kubelet ...
	I1225 13:31:24.279213 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 13:31:24.343548 1483946 logs.go:123] Gathering logs for container status ...
	I1225 13:31:24.343592 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:31:24.398275 1483946 logs.go:123] Gathering logs for kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] ...
	I1225 13:31:24.398312 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6"
	I1225 13:31:24.443435 1483946 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:31:24.443472 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:31:24.814711 1483946 logs.go:123] Gathering logs for dmesg ...
	I1225 13:31:24.814770 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:31:24.828613 1483946 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:31:24.828649 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:31:24.979501 1483946 logs.go:123] Gathering logs for coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] ...
	I1225 13:31:24.979538 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4"
	I1225 13:31:25.028976 1483946 logs.go:123] Gathering logs for kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] ...
	I1225 13:31:25.029011 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480"
	I1225 13:31:25.083148 1483946 logs.go:123] Gathering logs for kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] ...
	I1225 13:31:25.083191 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df"
	I1225 13:31:25.155284 1483946 logs.go:123] Gathering logs for etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] ...
	I1225 13:31:25.155336 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e"
	I1225 13:31:25.213437 1483946 logs.go:123] Gathering logs for storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] ...
	I1225 13:31:25.213483 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751"
	I1225 13:31:25.260934 1483946 logs.go:123] Gathering logs for storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] ...
	I1225 13:31:25.260973 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7"
	I1225 13:31:25.307395 1483946 logs.go:123] Gathering logs for kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] ...
	I1225 13:31:25.307430 1483946 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0"
	I1225 13:31:27.884673 1483946 system_pods.go:59] 8 kube-system pods found
	I1225 13:31:27.884702 1483946 system_pods.go:61] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.884708 1483946 system_pods.go:61] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.884713 1483946 system_pods.go:61] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.884717 1483946 system_pods.go:61] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.884721 1483946 system_pods.go:61] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.884725 1483946 system_pods.go:61] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.884731 1483946 system_pods.go:61] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.884737 1483946 system_pods.go:61] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.884744 1483946 system_pods.go:74] duration metric: took 4.04228589s to wait for pod list to return data ...
	I1225 13:31:27.884752 1483946 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:31:27.889125 1483946 default_sa.go:45] found service account: "default"
	I1225 13:31:27.889156 1483946 default_sa.go:55] duration metric: took 4.397454ms for default service account to be created ...
	I1225 13:31:27.889167 1483946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:31:27.896851 1483946 system_pods.go:86] 8 kube-system pods found
	I1225 13:31:27.896879 1483946 system_pods.go:89] "coredns-5dd5756b68-sbn7n" [1de44565-3ada-41a3-bcf0-b9229d3edab8] Running
	I1225 13:31:27.896884 1483946 system_pods.go:89] "etcd-embed-certs-880612" [70454479-0457-44b3-ab0f-d3029badbd31] Running
	I1225 13:31:27.896889 1483946 system_pods.go:89] "kube-apiserver-embed-certs-880612" [e66c5604-24b5-4e48-a8c9-3d0ce4fcc834] Running
	I1225 13:31:27.896894 1483946 system_pods.go:89] "kube-controller-manager-embed-certs-880612" [a4f659d1-5016-44a1-a265-cd8a14a7bcec] Running
	I1225 13:31:27.896898 1483946 system_pods.go:89] "kube-proxy-677d7" [5d4f790b-a982-4613-b671-c45f037503d9] Running
	I1225 13:31:27.896901 1483946 system_pods.go:89] "kube-scheduler-embed-certs-880612" [07aafbf2-4696-4234-86a5-255f94fa7d86] Running
	I1225 13:31:27.896908 1483946 system_pods.go:89] "metrics-server-57f55c9bc5-chnh2" [5a0bb4ec-4652-4e5a-9da4-3ce126a4be11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:31:27.896912 1483946 system_pods.go:89] "storage-provisioner" [34fa49ce-c807-4f30-9be6-317676447640] Running
	I1225 13:31:27.896920 1483946 system_pods.go:126] duration metric: took 7.747348ms to wait for k8s-apps to be running ...
	I1225 13:31:27.896929 1483946 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:31:27.896981 1483946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:27.917505 1483946 system_svc.go:56] duration metric: took 20.559839ms WaitForService to wait for kubelet.
	I1225 13:31:27.917542 1483946 kubeadm.go:581] duration metric: took 4m19.94335169s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:31:27.917568 1483946 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:31:27.921689 1483946 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:31:27.921715 1483946 node_conditions.go:123] node cpu capacity is 2
	I1225 13:31:27.921797 1483946 node_conditions.go:105] duration metric: took 4.219723ms to run NodePressure ...
	I1225 13:31:27.921814 1483946 start.go:228] waiting for startup goroutines ...
	I1225 13:31:27.921825 1483946 start.go:233] waiting for cluster config update ...
	I1225 13:31:27.921838 1483946 start.go:242] writing updated cluster config ...
	I1225 13:31:27.922130 1483946 ssh_runner.go:195] Run: rm -f paused
	I1225 13:31:27.976011 1483946 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:31:27.978077 1483946 out.go:177] * Done! kubectl is now configured to use "embed-certs-880612" cluster and "default" namespace by default
	I1225 13:31:24.870628 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:26.873379 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:29.367512 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:31.367730 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:33.867551 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace has status "Ready":"False"
	I1225 13:31:36.360292 1484104 pod_ready.go:81] duration metric: took 4m0.000407846s waiting for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" ...
	E1225 13:31:36.360349 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zgrj5" in "kube-system" namespace to be "Ready" (will not retry!)
	I1225 13:31:36.360378 1484104 pod_ready.go:38] duration metric: took 4m12.556234617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:31:36.360445 1484104 kubeadm.go:640] restartCluster took 4m32.941510355s
	W1225 13:31:36.360540 1484104 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1225 13:31:36.360578 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1225 13:31:50.552320 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.191703988s)
	I1225 13:31:50.552417 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:31:50.569621 1484104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:31:50.581050 1484104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:31:50.591777 1484104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:31:50.591837 1484104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:31:50.651874 1484104 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1225 13:31:50.651952 1484104 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:31:50.822009 1484104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:31:50.822174 1484104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:31:50.822258 1484104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:31:51.074237 1484104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:31:51.077463 1484104 out.go:204]   - Generating certificates and keys ...
	I1225 13:31:51.077575 1484104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:31:51.077637 1484104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:31:51.077703 1484104 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1225 13:31:51.077755 1484104 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1225 13:31:51.077816 1484104 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1225 13:31:51.077908 1484104 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1225 13:31:51.078059 1484104 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1225 13:31:51.078715 1484104 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1225 13:31:51.079408 1484104 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1225 13:31:51.080169 1484104 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1225 13:31:51.080635 1484104 kubeadm.go:322] [certs] Using the existing "sa" key
	I1225 13:31:51.080724 1484104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:31:51.147373 1484104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:31:51.298473 1484104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:31:51.403869 1484104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:31:51.719828 1484104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:31:51.720523 1484104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:31:51.725276 1484104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:31:51.727100 1484104 out.go:204]   - Booting up control plane ...
	I1225 13:31:51.727248 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:31:51.727343 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:31:51.727431 1484104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:31:51.745500 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:31:51.746331 1484104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:31:51.746392 1484104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:31:51.897052 1484104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:32:00.401261 1484104 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504339 seconds
	I1225 13:32:00.401463 1484104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:32:00.422010 1484104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:32:00.962174 1484104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:32:00.962418 1484104 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-344803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:32:01.479956 1484104 kubeadm.go:322] [bootstrap-token] Using token: 7n7qlp.3wejtqrgqunjtf8y
	I1225 13:32:01.481699 1484104 out.go:204]   - Configuring RBAC rules ...
	I1225 13:32:01.481862 1484104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:32:01.489709 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:32:01.499287 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:32:01.504520 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:32:01.508950 1484104 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:32:01.517277 1484104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:32:01.537420 1484104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:32:01.820439 1484104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:32:01.897010 1484104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:32:01.897039 1484104 kubeadm.go:322] 
	I1225 13:32:01.897139 1484104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:32:01.897169 1484104 kubeadm.go:322] 
	I1225 13:32:01.897259 1484104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:32:01.897270 1484104 kubeadm.go:322] 
	I1225 13:32:01.897292 1484104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:32:01.897383 1484104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:32:01.897471 1484104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:32:01.897484 1484104 kubeadm.go:322] 
	I1225 13:32:01.897558 1484104 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:32:01.897568 1484104 kubeadm.go:322] 
	I1225 13:32:01.897621 1484104 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:32:01.897629 1484104 kubeadm.go:322] 
	I1225 13:32:01.897702 1484104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:32:01.897822 1484104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:32:01.897923 1484104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:32:01.897935 1484104 kubeadm.go:322] 
	I1225 13:32:01.898040 1484104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:32:01.898141 1484104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:32:01.898156 1484104 kubeadm.go:322] 
	I1225 13:32:01.898264 1484104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898455 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:32:01.898506 1484104 kubeadm.go:322] 	--control-plane 
	I1225 13:32:01.898516 1484104 kubeadm.go:322] 
	I1225 13:32:01.898627 1484104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:32:01.898645 1484104 kubeadm.go:322] 
	I1225 13:32:01.898760 1484104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 7n7qlp.3wejtqrgqunjtf8y \
	I1225 13:32:01.898898 1484104 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:32:01.899552 1484104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:32:01.899699 1484104 cni.go:84] Creating CNI manager for ""
	I1225 13:32:01.899720 1484104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:32:01.902817 1484104 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:32:01.904375 1484104 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:32:01.943752 1484104 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:32:02.004751 1484104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:32:02.004915 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=default-k8s-diff-port-344803 minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.004920 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.377800 1484104 ops.go:34] apiserver oom_adj: -16
	I1225 13:32:02.378388 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:02.879083 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.379453 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:03.878676 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.378589 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:04.878630 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.378615 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:05.879009 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.379100 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:06.878610 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.378604 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:07.878597 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.379427 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:08.878637 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.378638 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:09.879200 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.378659 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:10.879285 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.378603 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:11.878605 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.379451 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:12.879431 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.379034 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:13.878468 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.378592 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:14.878569 1484104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:32:15.008581 1484104 kubeadm.go:1088] duration metric: took 13.00372954s to wait for elevateKubeSystemPrivileges.
	I1225 13:32:15.008626 1484104 kubeadm.go:406] StartCluster complete in 5m11.652335467s
	I1225 13:32:15.008653 1484104 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.008763 1484104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:32:15.011655 1484104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:32:15.011982 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:32:15.012172 1484104 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:32:15.012258 1484104 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012285 1484104 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012297 1484104 addons.go:246] addon storage-provisioner should already be in state true
	I1225 13:32:15.012311 1484104 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012347 1484104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-344803"
	I1225 13:32:15.012363 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012798 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012800 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.012831 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012833 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.012898 1484104 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-344803"
	I1225 13:32:15.012912 1484104 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.012919 1484104 addons.go:246] addon metrics-server should already be in state true
	I1225 13:32:15.012961 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.012972 1484104 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:32:15.013289 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.013318 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.032424 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1225 13:32:15.032981 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1225 13:32:15.033180 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1225 13:32:15.033455 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033575 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.033623 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.034052 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034069 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034173 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034195 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034209 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.034238 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.034412 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034635 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034693 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.034728 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.036190 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036205 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.036228 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.036229 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.040383 1484104 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-344803"
	W1225 13:32:15.040442 1484104 addons.go:246] addon default-storageclass should already be in state true
	I1225 13:32:15.040473 1484104 host.go:66] Checking if "default-k8s-diff-port-344803" exists ...
	I1225 13:32:15.040780 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.040820 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.055366 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1225 13:32:15.055979 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.056596 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.056623 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1225 13:32:15.056646 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1225 13:32:15.057067 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057205 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.057218 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.057413 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.057741 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.057768 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.057958 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.058013 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.058122 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058413 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.058776 1484104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:32:15.058816 1484104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:32:15.059142 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.059588 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.061854 1484104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:32:15.060849 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.063569 1484104 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.063593 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:32:15.065174 1484104 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1225 13:32:15.063622 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.066654 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 13:32:15.066677 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 13:32:15.066700 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.071209 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.071995 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072039 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072074 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.072089 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.072244 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072319 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.072500 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072558 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.072875 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.072941 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.073085 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.073138 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.077927 1484104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1225 13:32:15.078428 1484104 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:32:15.079241 1484104 main.go:141] libmachine: Using API Version  1
	I1225 13:32:15.079262 1484104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:32:15.079775 1484104 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:32:15.079983 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetState
	I1225 13:32:15.081656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .DriverName
	I1225 13:32:15.082002 1484104 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.082024 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:32:15.082047 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHHostname
	I1225 13:32:15.085367 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.085779 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:85:71", ip: ""} in network mk-default-k8s-diff-port-344803: {Iface:virbr1 ExpiryTime:2023-12-25 14:26:47 +0000 UTC Type:0 Mac:52:54:00:80:85:71 Iaid: IPaddr:192.168.61.39 Prefix:24 Hostname:default-k8s-diff-port-344803 Clientid:01:52:54:00:80:85:71}
	I1225 13:32:15.085805 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | domain default-k8s-diff-port-344803 has defined IP address 192.168.61.39 and MAC address 52:54:00:80:85:71 in network mk-default-k8s-diff-port-344803
	I1225 13:32:15.086119 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHPort
	I1225 13:32:15.086390 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHKeyPath
	I1225 13:32:15.086656 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .GetSSHUsername
	I1225 13:32:15.086875 1484104 sshutil.go:53] new ssh client: &{IP:192.168.61.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/default-k8s-diff-port-344803/id_rsa Username:docker}
	I1225 13:32:15.262443 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 13:32:15.262470 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1225 13:32:15.270730 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:32:15.285178 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:32:15.302070 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 13:32:15.302097 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 13:32:15.303686 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:32:15.373021 1484104 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.373054 1484104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 13:32:15.461862 1484104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 13:32:15.518928 1484104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-344803" context rescaled to 1 replicas
	I1225 13:32:15.518973 1484104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:32:15.520858 1484104 out.go:177] * Verifying Kubernetes components...
	I1225 13:32:15.522326 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:32:16.993620 1484104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.72284687s)
	I1225 13:32:16.993667 1484104 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1225 13:32:17.329206 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.025471574s)
	I1225 13:32:17.329305 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329321 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329352 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044135646s)
	I1225 13:32:17.329411 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329430 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329697 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329722 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329737 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329747 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.329764 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.329740 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.329805 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.329825 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.329838 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.331647 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331675 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.331706 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331715 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.331734 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.331766 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.350031 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.350068 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.350458 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.350499 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.350516 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.582723 1484104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.120815372s)
	I1225 13:32:17.582785 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.582798 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.582787 1484104 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.060422325s)
	I1225 13:32:17.582838 1484104 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.583145 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583172 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) DBG | Closing plugin on server side
	I1225 13:32:17.583179 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583192 1484104 main.go:141] libmachine: Making call to close driver server
	I1225 13:32:17.583201 1484104 main.go:141] libmachine: (default-k8s-diff-port-344803) Calling .Close
	I1225 13:32:17.583438 1484104 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:32:17.583461 1484104 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:32:17.583471 1484104 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-344803"
	I1225 13:32:17.585288 1484104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1225 13:32:17.586537 1484104 addons.go:508] enable addons completed in 2.574365441s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1225 13:32:17.595130 1484104 node_ready.go:49] node "default-k8s-diff-port-344803" has status "Ready":"True"
	I1225 13:32:17.595165 1484104 node_ready.go:38] duration metric: took 12.307997ms waiting for node "default-k8s-diff-port-344803" to be "Ready" ...
	I1225 13:32:17.595181 1484104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:32:17.613099 1484104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:19.621252 1484104 pod_ready.go:102] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:20.621494 1484104 pod_ready.go:92] pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.621519 1484104 pod_ready.go:81] duration metric: took 3.008379569s waiting for pod "coredns-5dd5756b68-rbmbs" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.621528 1484104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630348 1484104 pod_ready.go:92] pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.630375 1484104 pod_ready.go:81] duration metric: took 8.841316ms waiting for pod "etcd-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.630387 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636928 1484104 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.636953 1484104 pod_ready.go:81] duration metric: took 6.558203ms waiting for pod "kube-apiserver-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.636963 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643335 1484104 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.643360 1484104 pod_ready.go:81] duration metric: took 6.390339ms waiting for pod "kube-controller-manager-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.643369 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649496 1484104 pod_ready.go:92] pod "kube-proxy-fpk9s" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:20.649526 1484104 pod_ready.go:81] duration metric: took 6.150243ms waiting for pod "kube-proxy-fpk9s" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:20.649535 1484104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018065 1484104 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace has status "Ready":"True"
	I1225 13:32:21.018092 1484104 pod_ready.go:81] duration metric: took 368.549291ms waiting for pod "kube-scheduler-default-k8s-diff-port-344803" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:21.018102 1484104 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	I1225 13:32:23.026953 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:25.525822 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:27.530780 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:30.033601 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:32.528694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:34.529208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:37.028717 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:39.526632 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:42.026868 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:44.028002 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:46.526534 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:48.529899 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:51.026062 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:53.525655 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:55.526096 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:32:58.026355 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:00.026674 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:02.029299 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:04.526609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:06.526810 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:09.026498 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:11.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:13.029416 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:15.526242 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:18.026664 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:20.529125 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:23.026694 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:25.029350 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:27.527537 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:30.030562 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:32.526381 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:34.526801 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:37.027939 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:39.526249 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:41.526511 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:43.526783 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:45.527693 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:48.026703 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:50.027582 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:52.526290 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:55.027458 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:57.526559 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:33:59.526699 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:01.527938 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:03.529353 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:06.025942 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:08.027340 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:10.028087 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:12.525688 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:14.527122 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:16.529380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:19.026128 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:21.026183 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:23.027208 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:25.526282 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:27.531847 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:30.030025 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:32.526291 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:34.526470 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:36.527179 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:39.026270 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:41.029609 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:43.528905 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:46.026666 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:48.528560 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:51.025864 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:53.027211 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:55.527359 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:34:58.025696 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:00.027368 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:02.027605 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:04.525836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:06.526571 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:08.528550 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:11.026765 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:13.028215 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:15.525903 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:17.527102 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:20.026011 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:22.525873 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:24.528380 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:27.026402 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:29.527869 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:32.026671 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:34.026737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:36.026836 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:38.526788 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:41.027387 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:43.526936 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:46.026316 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:48.026940 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:50.526565 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:53.025988 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:55.027146 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:35:57.527287 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:00.028971 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:02.526704 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:05.025995 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:07.026612 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:09.027839 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:11.526845 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:13.527737 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:16.026967 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:18.028747 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:20.527437 1484104 pod_ready.go:102] pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace has status "Ready":"False"
	I1225 13:36:21.027372 1484104 pod_ready.go:81] duration metric: took 4m0.009244403s waiting for pod "metrics-server-57f55c9bc5-slv7p" in "kube-system" namespace to be "Ready" ...
	E1225 13:36:21.027405 1484104 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1225 13:36:21.027418 1484104 pod_ready.go:38] duration metric: took 4m3.432224558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1225 13:36:21.027474 1484104 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:36:21.027560 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:21.027806 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:21.090421 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:21.090464 1484104 cri.go:89] found id: ""
	I1225 13:36:21.090474 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:21.090526 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.095523 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:21.095605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:21.139092 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:21.139126 1484104 cri.go:89] found id: ""
	I1225 13:36:21.139136 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:21.139206 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.143957 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:21.144038 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:21.190905 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:21.190937 1484104 cri.go:89] found id: ""
	I1225 13:36:21.190948 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:21.191018 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.195814 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:21.195882 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:21.240274 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:21.240307 1484104 cri.go:89] found id: ""
	I1225 13:36:21.240317 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:21.240384 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.244831 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:21.244930 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:21.289367 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:21.289399 1484104 cri.go:89] found id: ""
	I1225 13:36:21.289410 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:21.289478 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.293796 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:21.293878 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:21.338757 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:21.338789 1484104 cri.go:89] found id: ""
	I1225 13:36:21.338808 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:21.338878 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.343145 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:21.343217 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:21.384898 1484104 cri.go:89] found id: ""
	I1225 13:36:21.384929 1484104 logs.go:284] 0 containers: []
	W1225 13:36:21.384936 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:21.384943 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:21.385006 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:21.436776 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:21.436809 1484104 cri.go:89] found id: ""
	I1225 13:36:21.436818 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:21.436871 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:21.442173 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:21.442210 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:21.886890 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:21.886944 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:21.971380 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:21.971568 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:21.992672 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:21.992724 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:22.015144 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:22.015198 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:22.195011 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:22.195060 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:22.237377 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:22.237423 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:22.284207 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:22.284240 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:22.343882 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:22.343939 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:22.404320 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:22.404356 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:22.465126 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:22.465175 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:22.521920 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:22.521963 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:22.575563 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:22.575601 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:22.627508 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627549 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:22.627808 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:22.627849 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:22.627862 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:22.627871 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:22.627882 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:32.629903 1484104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:36:32.648435 1484104 api_server.go:72] duration metric: took 4m17.129427556s to wait for apiserver process to appear ...
	I1225 13:36:32.648461 1484104 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:36:32.648499 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:32.648567 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:32.705637 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:32.705673 1484104 cri.go:89] found id: ""
	I1225 13:36:32.705685 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:32.705754 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.710516 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:32.710591 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:32.757193 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:32.757225 1484104 cri.go:89] found id: ""
	I1225 13:36:32.757236 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:32.757302 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.762255 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:32.762335 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:32.812666 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:32.812692 1484104 cri.go:89] found id: ""
	I1225 13:36:32.812703 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:32.812758 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.817599 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:32.817676 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:32.861969 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:32.862011 1484104 cri.go:89] found id: ""
	I1225 13:36:32.862021 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:32.862084 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.868439 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:32.868525 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:32.929969 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:32.930006 1484104 cri.go:89] found id: ""
	I1225 13:36:32.930015 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:32.930077 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.936071 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:32.936149 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:32.980256 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:32.980280 1484104 cri.go:89] found id: ""
	I1225 13:36:32.980288 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:32.980345 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:32.985508 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:32.985605 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:33.029393 1484104 cri.go:89] found id: ""
	I1225 13:36:33.029429 1484104 logs.go:284] 0 containers: []
	W1225 13:36:33.029440 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:33.029448 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:33.029521 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:33.075129 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.075156 1484104 cri.go:89] found id: ""
	I1225 13:36:33.075167 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:33.075229 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:33.079900 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:33.079940 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:33.121355 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:33.121391 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:33.205175 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:33.205394 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:33.225359 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:33.225393 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:33.282658 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:33.282710 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:33.334586 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:33.334627 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:33.383538 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:33.383576 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:33.438245 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:33.438284 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:33.487260 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:33.487305 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:33.504627 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:33.504665 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:33.641875 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:33.641912 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:33.692275 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:33.692311 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:33.731932 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:33.731971 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:34.081286 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081325 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:34.081438 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:34.081456 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:34.081465 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:34.081477 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:34.081490 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:44.083633 1484104 api_server.go:253] Checking apiserver healthz at https://192.168.61.39:8444/healthz ...
	I1225 13:36:44.091721 1484104 api_server.go:279] https://192.168.61.39:8444/healthz returned 200:
	ok
	I1225 13:36:44.093215 1484104 api_server.go:141] control plane version: v1.28.4
	I1225 13:36:44.093242 1484104 api_server.go:131] duration metric: took 11.444775391s to wait for apiserver health ...
	I1225 13:36:44.093251 1484104 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:36:44.093279 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 13:36:44.093330 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1225 13:36:44.135179 1484104 cri.go:89] found id: "3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:44.135212 1484104 cri.go:89] found id: ""
	I1225 13:36:44.135229 1484104 logs.go:284] 1 containers: [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca]
	I1225 13:36:44.135308 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.140367 1484104 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 13:36:44.140455 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1225 13:36:44.179525 1484104 cri.go:89] found id: "94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:44.179557 1484104 cri.go:89] found id: ""
	I1225 13:36:44.179568 1484104 logs.go:284] 1 containers: [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f]
	I1225 13:36:44.179644 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.184724 1484104 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 13:36:44.184822 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1225 13:36:44.225306 1484104 cri.go:89] found id: "667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:44.225339 1484104 cri.go:89] found id: ""
	I1225 13:36:44.225351 1484104 logs.go:284] 1 containers: [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd]
	I1225 13:36:44.225418 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.230354 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 13:36:44.230459 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1225 13:36:44.272270 1484104 cri.go:89] found id: "935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:44.272300 1484104 cri.go:89] found id: ""
	I1225 13:36:44.272311 1484104 logs.go:284] 1 containers: [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13]
	I1225 13:36:44.272387 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.277110 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 13:36:44.277187 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1225 13:36:44.326495 1484104 cri.go:89] found id: "09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.326519 1484104 cri.go:89] found id: ""
	I1225 13:36:44.326527 1484104 logs.go:284] 1 containers: [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3]
	I1225 13:36:44.326579 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.333707 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 13:36:44.333799 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1225 13:36:44.380378 1484104 cri.go:89] found id: "3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:44.380410 1484104 cri.go:89] found id: ""
	I1225 13:36:44.380423 1484104 logs.go:284] 1 containers: [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2]
	I1225 13:36:44.380488 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.390075 1484104 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 13:36:44.390171 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1225 13:36:44.440171 1484104 cri.go:89] found id: ""
	I1225 13:36:44.440211 1484104 logs.go:284] 0 containers: []
	W1225 13:36:44.440223 1484104 logs.go:286] No container was found matching "kindnet"
	I1225 13:36:44.440233 1484104 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 13:36:44.440321 1484104 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1225 13:36:44.482074 1484104 cri.go:89] found id: "2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:44.482104 1484104 cri.go:89] found id: ""
	I1225 13:36:44.482114 1484104 logs.go:284] 1 containers: [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8]
	I1225 13:36:44.482178 1484104 ssh_runner.go:195] Run: which crictl
	I1225 13:36:44.487171 1484104 logs.go:123] Gathering logs for kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] ...
	I1225 13:36:44.487209 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3"
	I1225 13:36:44.532144 1484104 logs.go:123] Gathering logs for CRI-O ...
	I1225 13:36:44.532179 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 13:36:44.891521 1484104 logs.go:123] Gathering logs for container status ...
	I1225 13:36:44.891568 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 13:36:44.938934 1484104 logs.go:123] Gathering logs for kubelet ...
	I1225 13:36:44.938967 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1225 13:36:45.017433 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.017627 1484104 logs.go:138] Found kubelet problem: Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.039058 1484104 logs.go:123] Gathering logs for dmesg ...
	I1225 13:36:45.039097 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 13:36:45.054560 1484104 logs.go:123] Gathering logs for etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] ...
	I1225 13:36:45.054592 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f"
	I1225 13:36:45.113698 1484104 logs.go:123] Gathering logs for coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] ...
	I1225 13:36:45.113735 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd"
	I1225 13:36:45.158302 1484104 logs.go:123] Gathering logs for kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] ...
	I1225 13:36:45.158342 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13"
	I1225 13:36:45.204784 1484104 logs.go:123] Gathering logs for kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] ...
	I1225 13:36:45.204824 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2"
	I1225 13:36:45.276442 1484104 logs.go:123] Gathering logs for storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] ...
	I1225 13:36:45.276483 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8"
	I1225 13:36:45.320645 1484104 logs.go:123] Gathering logs for describe nodes ...
	I1225 13:36:45.320678 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 13:36:45.452638 1484104 logs.go:123] Gathering logs for kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] ...
	I1225 13:36:45.452681 1484104 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca"
	I1225 13:36:45.500718 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500757 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1225 13:36:45.500817 1484104 out.go:239] X Problems detected in kubelet:
	W1225 13:36:45.500833 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: W1225 13:32:16.663764    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	W1225 13:36:45.500844 1484104 out.go:239]   Dec 25 13:32:16 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:32:16.663823    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-344803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-344803' and this object
	I1225 13:36:45.500853 1484104 out.go:309] Setting ErrFile to fd 2...
	I1225 13:36:45.500859 1484104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:36:55.510930 1484104 system_pods.go:59] 8 kube-system pods found
	I1225 13:36:55.510962 1484104 system_pods.go:61] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.510968 1484104 system_pods.go:61] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.510973 1484104 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.510977 1484104 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.510984 1484104 system_pods.go:61] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.510987 1484104 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.510995 1484104 system_pods.go:61] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.510999 1484104 system_pods.go:61] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.511014 1484104 system_pods.go:74] duration metric: took 11.417757674s to wait for pod list to return data ...
	I1225 13:36:55.511025 1484104 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:36:55.514087 1484104 default_sa.go:45] found service account: "default"
	I1225 13:36:55.514112 1484104 default_sa.go:55] duration metric: took 3.081452ms for default service account to be created ...
	I1225 13:36:55.514120 1484104 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 13:36:55.521321 1484104 system_pods.go:86] 8 kube-system pods found
	I1225 13:36:55.521355 1484104 system_pods.go:89] "coredns-5dd5756b68-rbmbs" [cd5fc3c3-b9db-437d-8088-2f97921bc3bd] Running
	I1225 13:36:55.521365 1484104 system_pods.go:89] "etcd-default-k8s-diff-port-344803" [3824f946-c4e1-4e9c-a52f-3d6753ce9350] Running
	I1225 13:36:55.521370 1484104 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-344803" [81cf9f5a-6cc3-4d66-956f-6b8a4e2a1ef5] Running
	I1225 13:36:55.521375 1484104 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-344803" [b3cfc8b9-d03b-4a1e-9500-08bb08dc64f3] Running
	I1225 13:36:55.521380 1484104 system_pods.go:89] "kube-proxy-fpk9s" [17d80ffc-e149-4449-aec9-9d90a2fda282] Running
	I1225 13:36:55.521387 1484104 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-344803" [795b56ad-2ee1-45ef-8c7b-1b878be6b0d7] Running
	I1225 13:36:55.521397 1484104 system_pods.go:89] "metrics-server-57f55c9bc5-slv7p" [a51c534d-e6d8-48b9-852f-caf598c8853a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 13:36:55.521409 1484104 system_pods.go:89] "storage-provisioner" [4bee5e6e-1252-4b3d-8d6c-73515d8567e4] Running
	I1225 13:36:55.521421 1484104 system_pods.go:126] duration metric: took 7.294824ms to wait for k8s-apps to be running ...
	I1225 13:36:55.521433 1484104 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 13:36:55.521492 1484104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:36:55.540217 1484104 system_svc.go:56] duration metric: took 18.766893ms WaitForService to wait for kubelet.
	I1225 13:36:55.540248 1484104 kubeadm.go:581] duration metric: took 4m40.021246946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1225 13:36:55.540271 1484104 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:36:55.544519 1484104 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:36:55.544685 1484104 node_conditions.go:123] node cpu capacity is 2
	I1225 13:36:55.544742 1484104 node_conditions.go:105] duration metric: took 4.463666ms to run NodePressure ...
	I1225 13:36:55.544783 1484104 start.go:228] waiting for startup goroutines ...
	I1225 13:36:55.544795 1484104 start.go:233] waiting for cluster config update ...
	I1225 13:36:55.544810 1484104 start.go:242] writing updated cluster config ...
	I1225 13:36:55.545268 1484104 ssh_runner.go:195] Run: rm -f paused
	I1225 13:36:55.607984 1484104 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1225 13:36:55.609993 1484104 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-344803" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:27:08 UTC, ends at Mon 2023-12-25 13:45:54 UTC. --
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.838421592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511954838408608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=77722f34-0ade-4e84-b1c4-f493971f9136 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.838936278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8887ad8b-8b5c-4d9b-a09b-28b1745e04fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.838981315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8887ad8b-8b5c-4d9b-a09b-28b1745e04fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.839240281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8887ad8b-8b5c-4d9b-a09b-28b1745e04fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.882248520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=313316b2-780a-4bf6-97db-2794f86b8cfb name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.882309426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=313316b2-780a-4bf6-97db-2794f86b8cfb name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.883968564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1332cde2-a2d1-4d39-9805-a97f8d742249 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.884838121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511954884806568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=1332cde2-a2d1-4d39-9805-a97f8d742249 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.885452436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=23ec86c0-3d03-4951-bfdf-88ba0b60d360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.885527178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=23ec86c0-3d03-4951-bfdf-88ba0b60d360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.885713623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=23ec86c0-3d03-4951-bfdf-88ba0b60d360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.924269926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=aec51811-fee1-45a9-9082-17014c6d95b0 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.924361988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=aec51811-fee1-45a9-9082-17014c6d95b0 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.925233506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1081d36c-b577-418c-930e-46ac0ac985bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.925864122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511954925653105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=1081d36c-b577-418c-930e-46ac0ac985bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.926572981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb331172-ae54-4c21-8099-3157486b6907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.926620726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb331172-ae54-4c21-8099-3157486b6907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.926890614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb331172-ae54-4c21-8099-3157486b6907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.964046565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=18c9df44-78f6-409b-ba24-9e43e94c5755 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.964134259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=18c9df44-78f6-409b-ba24-9e43e94c5755 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.965208463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=97358aa9-512c-4a19-a027-dda4c11f5d0c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.965581402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703511954965568781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=97358aa9-512c-4a19-a027-dda4c11f5d0c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.966280247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e3dc098-fd42-4ce8-a4bc-28a6a310edbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.966329104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e3dc098-fd42-4ce8-a4bc-28a6a310edbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:45:54 old-k8s-version-198979 crio[708]: time="2023-12-25 13:45:54.966515176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eee04693d74189924b9622b39b08d0c1a82a39417920b95311f7e60595834201,PodSandboxId:f04ef7bd6f0a22b979f413b3c535fd53468c870473d463843ef95793417074ce,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510875378231620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: af0877b6-43de-4c64-b5ac-279fa3325551,},Annotations:map[string]string{io.kubernetes.container.hash: e9a12b27,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be,PodSandboxId:ce277e6ba47cd520efeef710adb4892bcd0e2aeb73099383b9a829fbb0616f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1703510874302110639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mk9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487388f-a7b7-401e-9ce3-06fac16ddd47,},Annotations:map[string]string{io.kubernetes.container.hash: a0fe198d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1,PodSandboxId:b230f817f43edda50e77e7d96936601f75698b17da85fdc3672e565534e57b1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510873813284140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0d6c87f1-93ae-479b-ac0e-4623e326afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 9f8f673d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff,PodSandboxId:01599dd503c13b19393282a7db9edd5cbc647016900b78ba151dc284b2624654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1703510872533183297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vw9lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b7377f2-3ae6-4003-977d
-4eb3c7cd11f0,},Annotations:map[string]string{io.kubernetes.container.hash: e36b7973,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6,PodSandboxId:da2644db835d20c701a5d61dbe793394c150b0fb9c40314bad7a93372ec157a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1703510864666002805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd98fe94865b5b85093069a662706570,},Annotations:map[string]string{io.ku
bernetes.container.hash: 107160ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410,PodSandboxId:aa9954da2cb2ab43232ca5d8c0ffde30b97da93dd1114f70f858657cbd6d1909,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1703510863315990861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1a7d0e2b22b5770db35501a52f89ed,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2964ec56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc,PodSandboxId:f5a9d9ee3e96527f1bcfd109cefb4fd767a6091bd77b2e4cf05f05c85de07f20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1703510863194174320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.has
h: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2,PodSandboxId:4119b1ccf722cbd12566133e3817130461a3fd078c4734285f1fb190d73e3e5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1703510863175379403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-198979,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io
.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e3dc098-fd42-4ce8-a4bc-28a6a310edbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eee04693d7418       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   0                   f04ef7bd6f0a2       busybox
	b47cff327955c       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   ce277e6ba47cd       coredns-5644d7b6d9-mk9jx
	cf29569278acc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   b230f817f43ed       storage-provisioner
	910a2a6af295b       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   01599dd503c13       kube-proxy-vw9lf
	8a2abf03e37aa       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   da2644db835d2       etcd-old-k8s-version-198979
	0af8d6cd59ab9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   aa9954da2cb2a       kube-apiserver-old-k8s-version-198979
	e4ad453cbfd10       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   f5a9d9ee3e965       kube-scheduler-old-k8s-version-198979
	90fccd1ab3c39       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   4119b1ccf722c       kube-controller-manager-old-k8s-version-198979
	
	
	==> coredns [b47cff327955c591f8e8f9d644ad6987fa073012ed055a8b8006a72ffb08c2be] <==
	.:53
	2023-12-25T13:17:05.406Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-25T13:17:05.406Z [INFO] CoreDNS-1.6.2
	2023-12-25T13:17:05.406Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-25T13:17:05.417Z [INFO] 127.0.0.1:50006 - 47573 "HINFO IN 5597062525656395122.292789402761948928. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010323893s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2023-12-25T13:27:54.726Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-25T13:27:54.726Z [INFO] CoreDNS-1.6.2
	2023-12-25T13:27:54.726Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-25T13:27:55.736Z [INFO] 127.0.0.1:47335 - 55245 "HINFO IN 6994226206877751198.4386127104992780867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009589657s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-198979
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-198979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=old-k8s-version-198979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_16_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:16:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:45:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:45:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:45:20 +0000   Mon, 25 Dec 2023 13:16:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:45:20 +0000   Mon, 25 Dec 2023 13:28:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    old-k8s-version-198979
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 754d284c191d40dc9bd29b299bcd741b
	 System UUID:                754d284c-191d-40dc-9bd2-9b299bcd741b
	 Boot ID:                    642f28bc-a4e8-415d-9aee-5f3fcb175a25
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-mk9jx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-198979                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-apiserver-old-k8s-version-198979             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-controller-manager-old-k8s-version-198979    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-vw9lf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-198979             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                metrics-server-74d5856cc6-2ppzp                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-198979  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-198979     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet, old-k8s-version-198979     Node old-k8s-version-198979 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-198979     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-198979  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec25 13:27] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.668638] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144668] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.568037] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.541399] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.119358] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.168559] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.126129] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.264794] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[ +20.249626] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.467144] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec25 13:28] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [8a2abf03e37aac490974346ac98df0d557a7f99b8f18fa76dd29a068b9fd7fb6] <==
	2023-12-25 13:27:44.791840 I | etcdserver: restarting member 1bfd5d64eb00b2d5 in cluster 7d06a36b1777ee5c at commit index 525
	2023-12-25 13:27:44.792015 I | raft: 1bfd5d64eb00b2d5 became follower at term 2
	2023-12-25 13:27:44.792047 I | raft: newRaft 1bfd5d64eb00b2d5 [peers: [], term: 2, commit: 525, applied: 0, lastindex: 525, lastterm: 2]
	2023-12-25 13:27:44.803880 W | auth: simple token is not cryptographically signed
	2023-12-25 13:27:44.806670 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-25 13:27:44.808452 I | etcdserver/membership: added member 1bfd5d64eb00b2d5 [https://192.168.39.186:2380] to cluster 7d06a36b1777ee5c
	2023-12-25 13:27:44.808545 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-25 13:27:44.808597 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-25 13:27:44.809081 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-25 13:27:44.809264 I | embed: listening for metrics on http://192.168.39.186:2381
	2023-12-25 13:27:44.809729 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-25 13:27:46.592440 I | raft: 1bfd5d64eb00b2d5 is starting a new election at term 2
	2023-12-25 13:27:46.592506 I | raft: 1bfd5d64eb00b2d5 became candidate at term 3
	2023-12-25 13:27:46.592520 I | raft: 1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3
	2023-12-25 13:27:46.592530 I | raft: 1bfd5d64eb00b2d5 became leader at term 3
	2023-12-25 13:27:46.592536 I | raft: raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3
	2023-12-25 13:27:46.594242 I | etcdserver: published {Name:old-k8s-version-198979 ClientURLs:[https://192.168.39.186:2379]} to cluster 7d06a36b1777ee5c
	2023-12-25 13:27:46.594453 I | embed: ready to serve client requests
	2023-12-25 13:27:46.596100 I | embed: serving client requests on 192.168.39.186:2379
	2023-12-25 13:27:46.596421 I | embed: ready to serve client requests
	2023-12-25 13:27:46.600201 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-25 13:37:46.626466 I | mvcc: store.index: compact 827
	2023-12-25 13:37:46.629953 I | mvcc: finished scheduled compaction at 827 (took 2.776841ms)
	2023-12-25 13:42:46.634313 I | mvcc: store.index: compact 1046
	2023-12-25 13:42:46.636521 I | mvcc: finished scheduled compaction at 1046 (took 1.604984ms)
	
	
	==> kernel <==
	 13:45:55 up 18 min,  0 users,  load average: 0.33, 0.18, 0.14
	Linux old-k8s-version-198979 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [0af8d6cd59ab945dd2f728519f0a38639469b790ff75269c71e14d6e55212410] <==
	I1225 13:38:50.958220       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:38:50.958413       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:38:50.958480       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:38:50.958517       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:40:50.959036       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:40:50.959166       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:40:50.959242       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:40:50.959253       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:42:50.961216       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:42:50.961326       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:42:50.961391       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:42:50.961398       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:43:50.961926       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:43:50.962055       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:43:50.962092       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:43:50.962099       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:45:50.962441       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1225 13:45:50.962581       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 13:45:50.962640       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:45:50.962647       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [90fccd1ab3c39fefcb749e16ffc8605e841e7056f8171b0388a88d6f13ffcff2] <==
	E1225 13:39:43.174815       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:39:53.061248       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:40:13.427914       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:40:25.063310       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:40:43.680405       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:40:57.066131       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:41:13.932694       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:41:29.068201       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:41:44.184989       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:42:01.070454       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:42:14.437373       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:42:33.072654       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:42:44.689674       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:43:05.074990       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:43:14.942156       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:43:37.077441       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:43:45.194396       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:44:09.080185       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:44:15.446899       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:44:41.081999       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:44:45.699066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:45:13.084126       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:45:15.951289       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1225 13:45:45.086661       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1225 13:45:46.203306       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [910a2a6af295b1b01f52fe18a975c267d9d105bf2eed5c4debe0d0731281c5ff] <==
	W1225 13:17:04.361005       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1225 13:17:04.374480       1 node.go:135] Successfully retrieved node IP: 192.168.39.186
	I1225 13:17:04.374593       1 server_others.go:149] Using iptables Proxier.
	I1225 13:17:04.375513       1 server.go:529] Version: v1.16.0
	I1225 13:17:04.377030       1 config.go:313] Starting service config controller
	I1225 13:17:04.377157       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1225 13:17:04.377669       1 config.go:131] Starting endpoints config controller
	I1225 13:17:04.377729       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1225 13:17:04.478716       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1225 13:17:04.478904       1 shared_informer.go:204] Caches are synced for service config 
	W1225 13:27:53.145014       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1225 13:27:53.299225       1 node.go:135] Successfully retrieved node IP: 192.168.39.186
	I1225 13:27:53.299375       1 server_others.go:149] Using iptables Proxier.
	I1225 13:27:53.566891       1 server.go:529] Version: v1.16.0
	I1225 13:27:53.574132       1 config.go:313] Starting service config controller
	I1225 13:27:53.574208       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1225 13:27:53.574269       1 config.go:131] Starting endpoints config controller
	I1225 13:27:53.574282       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1225 13:27:53.677873       1 shared_informer.go:204] Caches are synced for service config 
	I1225 13:27:53.678132       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [e4ad453cbfd10d811941f7f5330a805c3db1e6551a186cf7fb6786d13851d6fc] <==
	E1225 13:16:42.654916       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 13:16:43.621845       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:16:43.637796       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 13:16:43.642275       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1225 13:16:43.643110       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1225 13:16:43.645101       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 13:16:43.646333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1225 13:16:43.649605       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 13:16:43.649979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1225 13:16:43.657201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:16:43.657770       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:16:43.659806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 13:17:02.500668       1 factory.go:585] pod is already present in the activeQ
	E1225 13:17:02.764901       1 factory.go:585] pod is already present in the activeQ
	I1225 13:27:44.079406       1 serving.go:319] Generated self-signed cert in-memory
	W1225 13:27:49.976727       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:27:49.977885       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:27:49.977975       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:27:49.977983       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:27:49.992426       1 server.go:143] Version: v1.16.0
	I1225 13:27:49.992529       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1225 13:27:50.018072       1 authorization.go:47] Authorization is disabled
	W1225 13:27:50.018146       1 authentication.go:79] Authentication is disabled
	I1225 13:27:50.018195       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1225 13:27:50.018570       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:27:08 UTC, ends at Mon 2023-12-25 13:45:55 UTC. --
	Dec 25 13:41:38 old-k8s-version-198979 kubelet[1032]: E1225 13:41:38.966955    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:41:50 old-k8s-version-198979 kubelet[1032]: E1225 13:41:50.966217    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:42:01 old-k8s-version-198979 kubelet[1032]: E1225 13:42:01.966310    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:42:12 old-k8s-version-198979 kubelet[1032]: E1225 13:42:12.966068    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:42:27 old-k8s-version-198979 kubelet[1032]: E1225 13:42:27.967340    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:42:38 old-k8s-version-198979 kubelet[1032]: E1225 13:42:38.966135    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:42:42 old-k8s-version-198979 kubelet[1032]: E1225 13:42:42.064497    1032 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 25 13:42:51 old-k8s-version-198979 kubelet[1032]: E1225 13:42:51.966489    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:43:05 old-k8s-version-198979 kubelet[1032]: E1225 13:43:05.966659    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:43:17 old-k8s-version-198979 kubelet[1032]: E1225 13:43:17.966384    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:43:30 old-k8s-version-198979 kubelet[1032]: E1225 13:43:30.966044    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:43:41 old-k8s-version-198979 kubelet[1032]: E1225 13:43:41.966505    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:43:56 old-k8s-version-198979 kubelet[1032]: E1225 13:43:56.965903    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:44:08 old-k8s-version-198979 kubelet[1032]: E1225 13:44:08.980297    1032 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:44:08 old-k8s-version-198979 kubelet[1032]: E1225 13:44:08.980421    1032 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:44:08 old-k8s-version-198979 kubelet[1032]: E1225 13:44:08.980479    1032 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:44:08 old-k8s-version-198979 kubelet[1032]: E1225 13:44:08.980508    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 25 13:44:19 old-k8s-version-198979 kubelet[1032]: E1225 13:44:19.969082    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:44:32 old-k8s-version-198979 kubelet[1032]: E1225 13:44:32.966630    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:44:46 old-k8s-version-198979 kubelet[1032]: E1225 13:44:46.966011    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:44:59 old-k8s-version-198979 kubelet[1032]: E1225 13:44:59.967878    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:45:12 old-k8s-version-198979 kubelet[1032]: E1225 13:45:12.966070    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:45:26 old-k8s-version-198979 kubelet[1032]: E1225 13:45:26.966514    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:45:41 old-k8s-version-198979 kubelet[1032]: E1225 13:45:41.967410    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 25 13:45:52 old-k8s-version-198979 kubelet[1032]: E1225 13:45:52.966305    1032 pod_workers.go:191] Error syncing pod 8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d ("metrics-server-74d5856cc6-2ppzp_kube-system(8ccc2881-b178-4edf-a4f2-8f4a0cfc0e5d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [cf29569278accacdc63587055c7c4248270d1bf393c40fa449ac4b96f40bb0f1] <==
	I1225 13:17:04.794227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:17:04.819091       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:17:04.820680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:17:04.877918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:17:04.878541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca479bec-c1b3-4241-884a-1a7f6f0c5197", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133 became leader
	I1225 13:17:04.879326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133!
	I1225 13:17:04.980602       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_75cdae0c-392d-4512-9725-249e1c30a133!
	I1225 13:27:53.953520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:53.978630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:53.979970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:28:11.430677       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:28:11.431534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca479bec-c1b3-4241-884a-1a7f6f0c5197", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59 became leader
	I1225 13:28:11.431673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59!
	I1225 13:28:11.532560       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-198979_2508eee5-db9a-4a7d-959e-f216c8af2c59!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-198979 -n old-k8s-version-198979
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-198979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-2ppzp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp: exit status 1 (97.07013ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-2ppzp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-198979 describe pod metrics-server-74d5856cc6-2ppzp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (531.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (452.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-330063 -n no-preload-330063
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:47:50.623534664 +0000 UTC m=+5495.242011684
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-330063 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-330063 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.792µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-330063 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-330063 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-330063 logs -n 25: (1.381185707s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-330063             | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:45 UTC | 25 Dec 23 13:45 UTC |
	| start   | -p newest-cni-058636 --memory=2200 --alsologtostderr   | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:45 UTC | 25 Dec 23 13:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-058636             | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:46 UTC | 25 Dec 23 13:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-058636                                   | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:45:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:45:57.955621 1488580 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:45:57.955774 1488580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:45:57.955785 1488580 out.go:309] Setting ErrFile to fd 2...
	I1225 13:45:57.955792 1488580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:45:57.956080 1488580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:45:57.956969 1488580 out.go:303] Setting JSON to false
	I1225 13:45:57.958391 1488580 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160111,"bootTime":1703351847,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:45:57.958516 1488580 start.go:138] virtualization: kvm guest
	I1225 13:45:57.960877 1488580 out.go:177] * [newest-cni-058636] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:45:57.962501 1488580 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:45:57.963844 1488580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:45:57.962528 1488580 notify.go:220] Checking for updates...
	I1225 13:45:57.966634 1488580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:45:57.968187 1488580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:45:57.969523 1488580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:45:57.970783 1488580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:45:57.972549 1488580 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:45:57.972711 1488580 config.go:182] Loaded profile config "embed-certs-880612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:45:57.972850 1488580 config.go:182] Loaded profile config "no-preload-330063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:45:57.973154 1488580 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:45:58.014347 1488580 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 13:45:58.015643 1488580 start.go:298] selected driver: kvm2
	I1225 13:45:58.015660 1488580 start.go:902] validating driver "kvm2" against <nil>
	I1225 13:45:58.015672 1488580 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:45:58.016529 1488580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:45:58.016642 1488580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:45:58.034189 1488580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:45:58.034273 1488580 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1225 13:45:58.034307 1488580 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1225 13:45:58.034636 1488580 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 13:45:58.034734 1488580 cni.go:84] Creating CNI manager for ""
	I1225 13:45:58.034753 1488580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:45:58.034768 1488580 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1225 13:45:58.034783 1488580 start_flags.go:323] config:
	{Name:newest-cni-058636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:45:58.034978 1488580 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:45:58.037225 1488580 out.go:177] * Starting control plane node newest-cni-058636 in cluster newest-cni-058636
	I1225 13:45:58.038645 1488580 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:45:58.038724 1488580 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1225 13:45:58.038742 1488580 cache.go:56] Caching tarball of preloaded images
	I1225 13:45:58.038861 1488580 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:45:58.038872 1488580 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1225 13:45:58.038971 1488580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/config.json ...
	I1225 13:45:58.038989 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/config.json: {Name:mkeff6f4c744b04b07645d2d4aa573cca2f7cf54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:45:58.039161 1488580 start.go:365] acquiring machines lock for newest-cni-058636: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:45:58.039203 1488580 start.go:369] acquired machines lock for "newest-cni-058636" in 21.521µs
	I1225 13:45:58.039229 1488580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-058636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:45:58.039324 1488580 start.go:125] createHost starting for "" (driver="kvm2")
	I1225 13:45:58.041816 1488580 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1225 13:45:58.042050 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:45:58.042116 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:45:58.058072 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1225 13:45:58.058545 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:45:58.059264 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:45:58.059296 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:45:58.059683 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:45:58.059944 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetMachineName
	I1225 13:45:58.060114 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:45:58.060343 1488580 start.go:159] libmachine.API.Create for "newest-cni-058636" (driver="kvm2")
	I1225 13:45:58.060382 1488580 client.go:168] LocalClient.Create starting
	I1225 13:45:58.060420 1488580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem
	I1225 13:45:58.060477 1488580 main.go:141] libmachine: Decoding PEM data...
	I1225 13:45:58.060498 1488580 main.go:141] libmachine: Parsing certificate...
	I1225 13:45:58.060574 1488580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem
	I1225 13:45:58.060614 1488580 main.go:141] libmachine: Decoding PEM data...
	I1225 13:45:58.060632 1488580 main.go:141] libmachine: Parsing certificate...
	I1225 13:45:58.060658 1488580 main.go:141] libmachine: Running pre-create checks...
	I1225 13:45:58.060670 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .PreCreateCheck
	I1225 13:45:58.061058 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetConfigRaw
	I1225 13:45:58.061523 1488580 main.go:141] libmachine: Creating machine...
	I1225 13:45:58.061539 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Create
	I1225 13:45:58.061697 1488580 main.go:141] libmachine: (newest-cni-058636) Creating KVM machine...
	I1225 13:45:58.063111 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found existing default KVM network
	I1225 13:45:58.065558 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:45:58.065312 1488602 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147ef0}
	I1225 13:45:58.071804 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | trying to create private KVM network mk-newest-cni-058636 192.168.39.0/24...
	I1225 13:45:58.164009 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | private KVM network mk-newest-cni-058636 192.168.39.0/24 created
	I1225 13:45:58.164112 1488580 main.go:141] libmachine: (newest-cni-058636) Setting up store path in /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636 ...
	I1225 13:45:58.164141 1488580 main.go:141] libmachine: (newest-cni-058636) Building disk image from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 13:45:58.164169 1488580 main.go:141] libmachine: (newest-cni-058636) Downloading /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1225 13:45:58.164233 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:45:58.163957 1488602 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:45:58.437657 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:45:58.437479 1488602 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa...
	I1225 13:45:58.582730 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:45:58.582569 1488602 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/newest-cni-058636.rawdisk...
	I1225 13:45:58.582771 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Writing magic tar header
	I1225 13:45:58.582792 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Writing SSH key tar header
	I1225 13:45:58.582804 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:45:58.582739 1488602 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636 ...
	I1225 13:45:58.582871 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636
	I1225 13:45:58.584236 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636 (perms=drwx------)
	I1225 13:45:58.584330 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines
	I1225 13:45:58.584349 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube/machines (perms=drwxr-xr-x)
	I1225 13:45:58.584362 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600/.minikube (perms=drwxr-xr-x)
	I1225 13:45:58.584376 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins/minikube-integration/17847-1442600 (perms=drwxrwxr-x)
	I1225 13:45:58.584395 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1225 13:45:58.584409 1488580 main.go:141] libmachine: (newest-cni-058636) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1225 13:45:58.584425 1488580 main.go:141] libmachine: (newest-cni-058636) Creating domain...
	I1225 13:45:58.584443 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:45:58.584474 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17847-1442600
	I1225 13:45:58.584496 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1225 13:45:58.584511 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home/jenkins
	I1225 13:45:58.584529 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Checking permissions on dir: /home
	I1225 13:45:58.584549 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Skipping /home - not owner
	I1225 13:45:58.585693 1488580 main.go:141] libmachine: (newest-cni-058636) define libvirt domain using xml: 
	I1225 13:45:58.585715 1488580 main.go:141] libmachine: (newest-cni-058636) <domain type='kvm'>
	I1225 13:45:58.585726 1488580 main.go:141] libmachine: (newest-cni-058636)   <name>newest-cni-058636</name>
	I1225 13:45:58.585735 1488580 main.go:141] libmachine: (newest-cni-058636)   <memory unit='MiB'>2200</memory>
	I1225 13:45:58.585745 1488580 main.go:141] libmachine: (newest-cni-058636)   <vcpu>2</vcpu>
	I1225 13:45:58.585751 1488580 main.go:141] libmachine: (newest-cni-058636)   <features>
	I1225 13:45:58.585757 1488580 main.go:141] libmachine: (newest-cni-058636)     <acpi/>
	I1225 13:45:58.585775 1488580 main.go:141] libmachine: (newest-cni-058636)     <apic/>
	I1225 13:45:58.585784 1488580 main.go:141] libmachine: (newest-cni-058636)     <pae/>
	I1225 13:45:58.585799 1488580 main.go:141] libmachine: (newest-cni-058636)     
	I1225 13:45:58.585814 1488580 main.go:141] libmachine: (newest-cni-058636)   </features>
	I1225 13:45:58.585827 1488580 main.go:141] libmachine: (newest-cni-058636)   <cpu mode='host-passthrough'>
	I1225 13:45:58.585839 1488580 main.go:141] libmachine: (newest-cni-058636)   
	I1225 13:45:58.585847 1488580 main.go:141] libmachine: (newest-cni-058636)   </cpu>
	I1225 13:45:58.585860 1488580 main.go:141] libmachine: (newest-cni-058636)   <os>
	I1225 13:45:58.585886 1488580 main.go:141] libmachine: (newest-cni-058636)     <type>hvm</type>
	I1225 13:45:58.585906 1488580 main.go:141] libmachine: (newest-cni-058636)     <boot dev='cdrom'/>
	I1225 13:45:58.585918 1488580 main.go:141] libmachine: (newest-cni-058636)     <boot dev='hd'/>
	I1225 13:45:58.585931 1488580 main.go:141] libmachine: (newest-cni-058636)     <bootmenu enable='no'/>
	I1225 13:45:58.585944 1488580 main.go:141] libmachine: (newest-cni-058636)   </os>
	I1225 13:45:58.585954 1488580 main.go:141] libmachine: (newest-cni-058636)   <devices>
	I1225 13:45:58.585970 1488580 main.go:141] libmachine: (newest-cni-058636)     <disk type='file' device='cdrom'>
	I1225 13:45:58.585987 1488580 main.go:141] libmachine: (newest-cni-058636)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/boot2docker.iso'/>
	I1225 13:45:58.586000 1488580 main.go:141] libmachine: (newest-cni-058636)       <target dev='hdc' bus='scsi'/>
	I1225 13:45:58.586012 1488580 main.go:141] libmachine: (newest-cni-058636)       <readonly/>
	I1225 13:45:58.586028 1488580 main.go:141] libmachine: (newest-cni-058636)     </disk>
	I1225 13:45:58.586042 1488580 main.go:141] libmachine: (newest-cni-058636)     <disk type='file' device='disk'>
	I1225 13:45:58.586054 1488580 main.go:141] libmachine: (newest-cni-058636)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1225 13:45:58.586066 1488580 main.go:141] libmachine: (newest-cni-058636)       <source file='/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/newest-cni-058636.rawdisk'/>
	I1225 13:45:58.586073 1488580 main.go:141] libmachine: (newest-cni-058636)       <target dev='hda' bus='virtio'/>
	I1225 13:45:58.586080 1488580 main.go:141] libmachine: (newest-cni-058636)     </disk>
	I1225 13:45:58.586088 1488580 main.go:141] libmachine: (newest-cni-058636)     <interface type='network'>
	I1225 13:45:58.586101 1488580 main.go:141] libmachine: (newest-cni-058636)       <source network='mk-newest-cni-058636'/>
	I1225 13:45:58.586118 1488580 main.go:141] libmachine: (newest-cni-058636)       <model type='virtio'/>
	I1225 13:45:58.586131 1488580 main.go:141] libmachine: (newest-cni-058636)     </interface>
	I1225 13:45:58.586144 1488580 main.go:141] libmachine: (newest-cni-058636)     <interface type='network'>
	I1225 13:45:58.586161 1488580 main.go:141] libmachine: (newest-cni-058636)       <source network='default'/>
	I1225 13:45:58.586174 1488580 main.go:141] libmachine: (newest-cni-058636)       <model type='virtio'/>
	I1225 13:45:58.586181 1488580 main.go:141] libmachine: (newest-cni-058636)     </interface>
	I1225 13:45:58.586187 1488580 main.go:141] libmachine: (newest-cni-058636)     <serial type='pty'>
	I1225 13:45:58.586194 1488580 main.go:141] libmachine: (newest-cni-058636)       <target port='0'/>
	I1225 13:45:58.586203 1488580 main.go:141] libmachine: (newest-cni-058636)     </serial>
	I1225 13:45:58.586211 1488580 main.go:141] libmachine: (newest-cni-058636)     <console type='pty'>
	I1225 13:45:58.586220 1488580 main.go:141] libmachine: (newest-cni-058636)       <target type='serial' port='0'/>
	I1225 13:45:58.586227 1488580 main.go:141] libmachine: (newest-cni-058636)     </console>
	I1225 13:45:58.586233 1488580 main.go:141] libmachine: (newest-cni-058636)     <rng model='virtio'>
	I1225 13:45:58.586241 1488580 main.go:141] libmachine: (newest-cni-058636)       <backend model='random'>/dev/random</backend>
	I1225 13:45:58.586249 1488580 main.go:141] libmachine: (newest-cni-058636)     </rng>
	I1225 13:45:58.586256 1488580 main.go:141] libmachine: (newest-cni-058636)     
	I1225 13:45:58.586264 1488580 main.go:141] libmachine: (newest-cni-058636)     
	I1225 13:45:58.586272 1488580 main.go:141] libmachine: (newest-cni-058636)   </devices>
	I1225 13:45:58.586280 1488580 main.go:141] libmachine: (newest-cni-058636) </domain>
	I1225 13:45:58.586287 1488580 main.go:141] libmachine: (newest-cni-058636) 
	I1225 13:45:58.591091 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:d4:3d:49 in network default
	I1225 13:45:58.591626 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:45:58.591654 1488580 main.go:141] libmachine: (newest-cni-058636) Ensuring networks are active...
	I1225 13:45:58.592409 1488580 main.go:141] libmachine: (newest-cni-058636) Ensuring network default is active
	I1225 13:45:58.592755 1488580 main.go:141] libmachine: (newest-cni-058636) Ensuring network mk-newest-cni-058636 is active
	I1225 13:45:58.593351 1488580 main.go:141] libmachine: (newest-cni-058636) Getting domain xml...
	I1225 13:45:58.594186 1488580 main.go:141] libmachine: (newest-cni-058636) Creating domain...
	I1225 13:46:00.148708 1488580 main.go:141] libmachine: (newest-cni-058636) Waiting to get IP...
	I1225 13:46:00.149467 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:00.150041 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:00.150115 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:00.150037 1488602 retry.go:31] will retry after 194.656189ms: waiting for machine to come up
	I1225 13:46:00.346748 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:00.347212 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:00.347247 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:00.347154 1488602 retry.go:31] will retry after 275.732041ms: waiting for machine to come up
	I1225 13:46:00.624738 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:00.625246 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:00.625271 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:00.625196 1488602 retry.go:31] will retry after 423.298774ms: waiting for machine to come up
	I1225 13:46:01.049747 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:01.050199 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:01.050234 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:01.050146 1488602 retry.go:31] will retry after 422.759997ms: waiting for machine to come up
	I1225 13:46:01.474860 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:01.475372 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:01.475403 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:01.475329 1488602 retry.go:31] will retry after 556.855878ms: waiting for machine to come up
	I1225 13:46:02.034274 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:02.034832 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:02.034858 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:02.034779 1488602 retry.go:31] will retry after 692.389615ms: waiting for machine to come up
	I1225 13:46:02.728599 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:02.729033 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:02.729069 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:02.728987 1488602 retry.go:31] will retry after 1.075088702s: waiting for machine to come up
	I1225 13:46:03.805922 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:03.806429 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:03.806519 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:03.806392 1488602 retry.go:31] will retry after 1.292344937s: waiting for machine to come up
	I1225 13:46:05.099985 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:05.100568 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:05.100593 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:05.100494 1488602 retry.go:31] will retry after 1.29688503s: waiting for machine to come up
	I1225 13:46:06.398517 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:06.399167 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:06.399197 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:06.399094 1488602 retry.go:31] will retry after 1.624135199s: waiting for machine to come up
	I1225 13:46:08.024477 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:08.025019 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:08.025056 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:08.024936 1488602 retry.go:31] will retry after 2.616581339s: waiting for machine to come up
	I1225 13:46:10.644457 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:10.645062 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:10.645093 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:10.645001 1488602 retry.go:31] will retry after 2.184913913s: waiting for machine to come up
	I1225 13:46:12.831594 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:12.832032 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:12.832067 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:12.831965 1488602 retry.go:31] will retry after 3.889212385s: waiting for machine to come up
	I1225 13:46:16.723994 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:16.724539 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find current IP address of domain newest-cni-058636 in network mk-newest-cni-058636
	I1225 13:46:16.724568 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | I1225 13:46:16.724485 1488602 retry.go:31] will retry after 3.497104844s: waiting for machine to come up
	I1225 13:46:20.223679 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.224202 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has current primary IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.224232 1488580 main.go:141] libmachine: (newest-cni-058636) Found IP for machine: 192.168.39.39
	I1225 13:46:20.224303 1488580 main.go:141] libmachine: (newest-cni-058636) Reserving static IP address...
	I1225 13:46:20.224582 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | unable to find host DHCP lease matching {name: "newest-cni-058636", mac: "52:54:00:9b:2d:e4", ip: "192.168.39.39"} in network mk-newest-cni-058636
	I1225 13:46:20.322680 1488580 main.go:141] libmachine: (newest-cni-058636) Reserved static IP address: 192.168.39.39
	I1225 13:46:20.322718 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Getting to WaitForSSH function...
	I1225 13:46:20.322728 1488580 main.go:141] libmachine: (newest-cni-058636) Waiting for SSH to be available...
	I1225 13:46:20.325950 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.326388 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.326423 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.326604 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Using SSH client type: external
	I1225 13:46:20.326631 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Using SSH private key: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa (-rw-------)
	I1225 13:46:20.326724 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1225 13:46:20.326768 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | About to run SSH command:
	I1225 13:46:20.326790 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | exit 0
	I1225 13:46:20.419328 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | SSH cmd err, output: <nil>: 
	I1225 13:46:20.419652 1488580 main.go:141] libmachine: (newest-cni-058636) KVM machine creation complete!
	I1225 13:46:20.419958 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetConfigRaw
	I1225 13:46:20.420608 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:20.420902 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:20.421092 1488580 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1225 13:46:20.421116 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:46:20.422644 1488580 main.go:141] libmachine: Detecting operating system of created instance...
	I1225 13:46:20.422663 1488580 main.go:141] libmachine: Waiting for SSH to be available...
	I1225 13:46:20.422672 1488580 main.go:141] libmachine: Getting to WaitForSSH function...
	I1225 13:46:20.422683 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:20.425341 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.425830 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.425876 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.425995 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:20.426185 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.426409 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.426569 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:20.426802 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:20.427247 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:20.427263 1488580 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1225 13:46:20.549914 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:46:20.549945 1488580 main.go:141] libmachine: Detecting the provisioner...
	I1225 13:46:20.549956 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:20.552826 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.553236 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.553262 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.553411 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:20.553605 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.553763 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.553901 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:20.554058 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:20.554392 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:20.554404 1488580 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1225 13:46:20.679764 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1225 13:46:20.679936 1488580 main.go:141] libmachine: found compatible host: buildroot
	I1225 13:46:20.679957 1488580 main.go:141] libmachine: Provisioning with buildroot...
	I1225 13:46:20.679967 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetMachineName
	I1225 13:46:20.680274 1488580 buildroot.go:166] provisioning hostname "newest-cni-058636"
	I1225 13:46:20.680311 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetMachineName
	I1225 13:46:20.680513 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:20.683343 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.683744 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.683787 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.684056 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:20.684274 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.684480 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.684680 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:20.684893 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:20.685205 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:20.685217 1488580 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-058636 && echo "newest-cni-058636" | sudo tee /etc/hostname
	I1225 13:46:20.820729 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-058636
	
	I1225 13:46:20.820761 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:20.823889 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.824340 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.824366 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.824554 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:20.824786 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.824979 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:20.825153 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:20.825351 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:20.825697 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:20.825723 1488580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-058636' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-058636/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-058636' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 13:46:20.959419 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1225 13:46:20.959457 1488580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17847-1442600/.minikube CaCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17847-1442600/.minikube}
	I1225 13:46:20.959504 1488580 buildroot.go:174] setting up certificates
	I1225 13:46:20.959521 1488580 provision.go:83] configureAuth start
	I1225 13:46:20.959540 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetMachineName
	I1225 13:46:20.959894 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetIP
	I1225 13:46:20.962523 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.962959 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.962990 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.963130 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:20.965593 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.966039 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:20.966079 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:20.966280 1488580 provision.go:138] copyHostCerts
	I1225 13:46:20.966368 1488580 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem, removing ...
	I1225 13:46:20.966391 1488580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem
	I1225 13:46:20.966504 1488580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.pem (1078 bytes)
	I1225 13:46:20.966663 1488580 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem, removing ...
	I1225 13:46:20.966677 1488580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem
	I1225 13:46:20.966725 1488580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/cert.pem (1123 bytes)
	I1225 13:46:20.966825 1488580 exec_runner.go:144] found /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem, removing ...
	I1225 13:46:20.966836 1488580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem
	I1225 13:46:20.966873 1488580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17847-1442600/.minikube/key.pem (1675 bytes)
	I1225 13:46:20.966962 1488580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem org=jenkins.newest-cni-058636 san=[192.168.39.39 192.168.39.39 localhost 127.0.0.1 minikube newest-cni-058636]
	I1225 13:46:21.174238 1488580 provision.go:172] copyRemoteCerts
	I1225 13:46:21.174299 1488580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 13:46:21.174325 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.177349 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.177760 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.177805 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.178020 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.178232 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.178389 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.178576 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:21.268025 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1225 13:46:21.292769 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 13:46:21.318504 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 13:46:21.343183 1488580 provision.go:86] duration metric: configureAuth took 383.640994ms
	I1225 13:46:21.343217 1488580 buildroot.go:189] setting minikube options for container-runtime
	I1225 13:46:21.343499 1488580 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:46:21.343597 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.346308 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.346698 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.346733 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.346846 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.347076 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.347260 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.347401 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.347580 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:21.348028 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:21.348052 1488580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 13:46:21.679040 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 13:46:21.679109 1488580 main.go:141] libmachine: Checking connection to Docker...
	I1225 13:46:21.679123 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetURL
	I1225 13:46:21.680400 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Using libvirt version 6000000
	I1225 13:46:21.682920 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.683289 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.683322 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.683462 1488580 main.go:141] libmachine: Docker is up and running!
	I1225 13:46:21.683476 1488580 main.go:141] libmachine: Reticulating splines...
	I1225 13:46:21.683484 1488580 client.go:171] LocalClient.Create took 23.623091267s
	I1225 13:46:21.683507 1488580 start.go:167] duration metric: libmachine.API.Create for "newest-cni-058636" took 23.6231736s
	I1225 13:46:21.683522 1488580 start.go:300] post-start starting for "newest-cni-058636" (driver="kvm2")
	I1225 13:46:21.683537 1488580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 13:46:21.683560 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:21.683847 1488580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 13:46:21.683872 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.686269 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.686700 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.686726 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.686887 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.687076 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.687236 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.687412 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:21.782132 1488580 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 13:46:21.786699 1488580 info.go:137] Remote host: Buildroot 2021.02.12
	I1225 13:46:21.786727 1488580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/addons for local assets ...
	I1225 13:46:21.786801 1488580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17847-1442600/.minikube/files for local assets ...
	I1225 13:46:21.786885 1488580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem -> 14497972.pem in /etc/ssl/certs
	I1225 13:46:21.787000 1488580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 13:46:21.797104 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:46:21.822207 1488580 start.go:303] post-start completed in 138.658586ms
	I1225 13:46:21.822319 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetConfigRaw
	I1225 13:46:21.823675 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetIP
	I1225 13:46:21.827364 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.827797 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.827828 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.828129 1488580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/config.json ...
	I1225 13:46:21.828366 1488580 start.go:128] duration metric: createHost completed in 23.789021702s
	I1225 13:46:21.828429 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.830934 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.831334 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.831364 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.831562 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.831811 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.832009 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.832226 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.832464 1488580 main.go:141] libmachine: Using SSH client type: native
	I1225 13:46:21.832831 1488580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1225 13:46:21.832850 1488580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1225 13:46:21.963567 1488580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703511981.940212619
	
	I1225 13:46:21.963598 1488580 fix.go:206] guest clock: 1703511981.940212619
	I1225 13:46:21.963607 1488580 fix.go:219] Guest: 2023-12-25 13:46:21.940212619 +0000 UTC Remote: 2023-12-25 13:46:21.82840962 +0000 UTC m=+23.937137207 (delta=111.802999ms)
	I1225 13:46:21.963629 1488580 fix.go:190] guest clock delta is within tolerance: 111.802999ms
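(For context: the guest-clock check above compares the time reported by `date +%s.%N` inside the VM against the host clock and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison, with the two timestamps taken from the log and an assumed 2-second threshold, which is not necessarily minikube's constant:)

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether guest and host clocks agree to within
// the given tolerance.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Values taken from the log lines above: guest from `date +%s.%N`,
	// host ("Remote") from the local clock at the time of the check.
	guest := time.Unix(1703511981, 940212619).UTC()
	host := time.Date(2023, 12, 25, 13, 46, 21, 828409620, time.UTC)
	fmt.Println("delta:", guest.Sub(host), "ok:", clockWithinTolerance(guest, host, 2*time.Second))
}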
	I1225 13:46:21.963634 1488580 start.go:83] releasing machines lock for "newest-cni-058636", held for 23.924422104s
	I1225 13:46:21.963660 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:21.963976 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetIP
	I1225 13:46:21.966970 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.967421 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.967465 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.967672 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:21.968306 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:21.968507 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:21.968631 1488580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 13:46:21.968692 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.968730 1488580 ssh_runner.go:195] Run: cat /version.json
	I1225 13:46:21.968752 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:21.971514 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.971558 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.971869 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.971928 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.971961 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:21.971987 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:21.972071 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.972191 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:21.972280 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.972390 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:21.972455 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.972507 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:21.972578 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:21.972621 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:22.081662 1488580 ssh_runner.go:195] Run: systemctl --version
	I1225 13:46:22.088373 1488580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 13:46:22.253276 1488580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 13:46:22.259933 1488580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 13:46:22.260029 1488580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 13:46:22.277395 1488580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 13:46:22.277461 1488580 start.go:475] detecting cgroup driver to use...
	I1225 13:46:22.277590 1488580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 13:46:22.294587 1488580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 13:46:22.308532 1488580 docker.go:203] disabling cri-docker service (if available) ...
	I1225 13:46:22.308618 1488580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 13:46:22.322964 1488580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 13:46:22.337445 1488580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 13:46:22.449011 1488580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 13:46:22.571466 1488580 docker.go:219] disabling docker service ...
	I1225 13:46:22.571559 1488580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 13:46:22.585880 1488580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 13:46:22.599343 1488580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 13:46:22.714482 1488580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 13:46:22.826263 1488580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 13:46:22.840027 1488580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 13:46:22.858942 1488580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 13:46:22.859016 1488580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:46:22.870132 1488580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1225 13:46:22.870219 1488580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:46:22.881000 1488580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 13:46:22.892313 1488580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
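(For context: the sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. An equivalent sketch in Go, assumed for illustration rather than what minikube actually runs on the guest:)

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror the two sed substitutions from the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}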
	I1225 13:46:22.902621 1488580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 13:46:22.913198 1488580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 13:46:22.924881 1488580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 13:46:22.924964 1488580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 13:46:22.940602 1488580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
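(For context: the sysctl failure above is expected on a fresh guest, because the bridge-netfilter keys only exist once the br_netfilter module is loaded; the tool therefore falls back to modprobe and then enables IPv4 forwarding. A hedged sketch of that fallback, mirroring the commands in the log:)

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	log.Printf("%s %v: %s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The sysctl key only appears after br_netfilter is loaded.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatal(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatal(err)
	}
}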
	I1225 13:46:22.951180 1488580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 13:46:23.081659 1488580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 13:46:23.261454 1488580 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 13:46:23.261532 1488580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 13:46:23.269984 1488580 start.go:543] Will wait 60s for crictl version
	I1225 13:46:23.270077 1488580 ssh_runner.go:195] Run: which crictl
	I1225 13:46:23.274657 1488580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1225 13:46:23.316892 1488580 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1225 13:46:23.317006 1488580 ssh_runner.go:195] Run: crio --version
	I1225 13:46:23.362488 1488580 ssh_runner.go:195] Run: crio --version
	I1225 13:46:23.411349 1488580 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1225 13:46:23.413029 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetIP
	I1225 13:46:23.415920 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:23.416269 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:23.416302 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:23.416541 1488580 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1225 13:46:23.421467 1488580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:46:23.434500 1488580 localpath.go:92] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/client.crt -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/client.crt
	I1225 13:46:23.434688 1488580 localpath.go:117] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/client.key -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/client.key
	I1225 13:46:23.437052 1488580 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1225 13:46:23.438586 1488580 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:46:23.438676 1488580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:46:23.473339 1488580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1225 13:46:23.473418 1488580 ssh_runner.go:195] Run: which lz4
	I1225 13:46:23.477857 1488580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1225 13:46:23.483149 1488580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1225 13:46:23.483188 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I1225 13:46:25.105634 1488580 crio.go:444] Took 1.627827 seconds to copy over tarball
	I1225 13:46:25.105713 1488580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1225 13:46:28.093580 1488580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.987842956s)
	I1225 13:46:28.093615 1488580 crio.go:451] Took 2.987941 seconds to extract the tarball
	I1225 13:46:28.093628 1488580 ssh_runner.go:146] rm: /preloaded.tar.lz4
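(For context: the preload flow above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached tarball over if not, unpacks it under /var with lz4, and removes the tarball. A rough local sketch of the same sequence; the scp target and tarball name here are illustrative, and minikube uses its own SSH transfer rather than the scp binary:)

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	log.Printf("%v: %s", args, out)
	return err
}

func main() {
	if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
		// Not present yet: this is where the cached tarball would be copied over.
		if err := run("scp", "preloaded-images-k8s.tar.lz4", "root@192.168.39.39:/preloaded.tar.lz4"); err != nil {
			log.Fatal(err)
		}
	}
	if err := run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
	_ = run("sudo", "rm", "/preloaded.tar.lz4")
}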
	I1225 13:46:28.130663 1488580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 13:46:28.222885 1488580 crio.go:496] all images are preloaded for cri-o runtime.
	I1225 13:46:28.222912 1488580 cache_images.go:84] Images are preloaded, skipping loading
	I1225 13:46:28.222996 1488580 ssh_runner.go:195] Run: crio config
	I1225 13:46:28.285182 1488580 cni.go:84] Creating CNI manager for ""
	I1225 13:46:28.285207 1488580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:46:28.285265 1488580 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1225 13:46:28.285288 1488580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-058636 NodeName:newest-cni-058636 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 13:46:28.285457 1488580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-058636"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 13:46:28.285569 1488580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-058636 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1225 13:46:28.285657 1488580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1225 13:46:28.296667 1488580 binaries.go:44] Found k8s binaries, skipping transfer
	I1225 13:46:28.296775 1488580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 13:46:28.307057 1488580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I1225 13:46:28.325599 1488580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 13:46:28.343056 1488580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1225 13:46:28.361092 1488580 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I1225 13:46:28.365472 1488580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 13:46:28.379521 1488580 certs.go:56] Setting up /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636 for IP: 192.168.39.39
	I1225 13:46:28.379560 1488580 certs.go:190] acquiring lock for shared ca certs: {Name:mkdff45cf422f4195d2e2c19bb47efebadd55a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:28.379760 1488580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key
	I1225 13:46:28.379814 1488580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key
	I1225 13:46:28.379941 1488580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/client.key
	I1225 13:46:28.379972 1488580 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key.365ed9e3
	I1225 13:46:28.379988 1488580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt.365ed9e3 with IP's: [192.168.39.39 10.96.0.1 127.0.0.1 10.0.0.1]
	I1225 13:46:28.631955 1488580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt.365ed9e3 ...
	I1225 13:46:28.632002 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt.365ed9e3: {Name:mkf23ae73d1576f25cee62fe8e4c13e0046ff8a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:28.632247 1488580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key.365ed9e3 ...
	I1225 13:46:28.632268 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key.365ed9e3: {Name:mkd65e7f655297c90c3b0601fa9dd38dada9476f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:28.632356 1488580 certs.go:337] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt.365ed9e3 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt
	I1225 13:46:28.632431 1488580 certs.go:341] copying /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key.365ed9e3 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key
	I1225 13:46:28.632490 1488580 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.key
	I1225 13:46:28.632506 1488580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.crt with IP's: []
	I1225 13:46:28.771986 1488580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.crt ...
	I1225 13:46:28.772025 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.crt: {Name:mk5fb51a68d185fc16c4fec428898c03cabbe0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:28.772234 1488580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.key ...
	I1225 13:46:28.772255 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.key: {Name:mk6b7a18cefbf0ac9612bdefb852fea152681c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:28.772531 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem (1338 bytes)
	W1225 13:46:28.772595 1488580 certs.go:433] ignoring /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797_empty.pem, impossibly tiny 0 bytes
	I1225 13:46:28.772619 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 13:46:28.772657 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/ca.pem (1078 bytes)
	I1225 13:46:28.772695 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/cert.pem (1123 bytes)
	I1225 13:46:28.772736 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/certs/key.pem (1675 bytes)
	I1225 13:46:28.772799 1488580 certs.go:437] found cert: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem (1708 bytes)
	I1225 13:46:28.773533 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1225 13:46:28.802172 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 13:46:28.828819 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 13:46:28.854159 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 13:46:28.882676 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 13:46:28.909823 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1225 13:46:28.936305 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 13:46:28.961323 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 13:46:28.987450 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/certs/1449797.pem --> /usr/share/ca-certificates/1449797.pem (1338 bytes)
	I1225 13:46:29.013773 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/ssl/certs/14497972.pem --> /usr/share/ca-certificates/14497972.pem (1708 bytes)
	I1225 13:46:29.038570 1488580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 13:46:29.065933 1488580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1225 13:46:29.084620 1488580 ssh_runner.go:195] Run: openssl version
	I1225 13:46:29.091310 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1225 13:46:29.102854 1488580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:46:29.108099 1488580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 25 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:46:29.108166 1488580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 13:46:29.114316 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1225 13:46:29.124552 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1449797.pem && ln -fs /usr/share/ca-certificates/1449797.pem /etc/ssl/certs/1449797.pem"
	I1225 13:46:29.134799 1488580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1449797.pem
	I1225 13:46:29.139720 1488580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 25 12:25 /usr/share/ca-certificates/1449797.pem
	I1225 13:46:29.139798 1488580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1449797.pem
	I1225 13:46:29.145574 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1449797.pem /etc/ssl/certs/51391683.0"
	I1225 13:46:29.156639 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14497972.pem && ln -fs /usr/share/ca-certificates/14497972.pem /etc/ssl/certs/14497972.pem"
	I1225 13:46:29.168291 1488580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14497972.pem
	I1225 13:46:29.174301 1488580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 25 12:25 /usr/share/ca-certificates/14497972.pem
	I1225 13:46:29.174372 1488580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14497972.pem
	I1225 13:46:29.180927 1488580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14497972.pem /etc/ssl/certs/3ec20f2e.0"
	I1225 13:46:29.191514 1488580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1225 13:46:29.196162 1488580 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1225 13:46:29.196252 1488580 kubeadm.go:404] StartCluster: {Name:newest-cni-058636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:46:29.196330 1488580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 13:46:29.196391 1488580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 13:46:29.235247 1488580 cri.go:89] found id: ""
	I1225 13:46:29.235332 1488580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 13:46:29.246035 1488580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 13:46:29.256252 1488580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 13:46:29.265909 1488580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 13:46:29.265974 1488580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1225 13:46:29.393035 1488580 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1225 13:46:29.393105 1488580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1225 13:46:29.634851 1488580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 13:46:29.635035 1488580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 13:46:29.635184 1488580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1225 13:46:29.887396 1488580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 13:46:30.023942 1488580 out.go:204]   - Generating certificates and keys ...
	I1225 13:46:30.024108 1488580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1225 13:46:30.024207 1488580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1225 13:46:30.079750 1488580 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 13:46:30.268138 1488580 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1225 13:46:30.598303 1488580 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1225 13:46:30.783530 1488580 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1225 13:46:30.941386 1488580 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1225 13:46:30.941669 1488580 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-058636] and IPs [192.168.39.39 127.0.0.1 ::1]
	I1225 13:46:31.146406 1488580 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1225 13:46:31.146858 1488580 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-058636] and IPs [192.168.39.39 127.0.0.1 ::1]
	I1225 13:46:31.364798 1488580 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 13:46:31.421152 1488580 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 13:46:31.564897 1488580 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1225 13:46:31.565233 1488580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 13:46:31.704050 1488580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 13:46:31.965331 1488580 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 13:46:32.222162 1488580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 13:46:32.475022 1488580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 13:46:32.771231 1488580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 13:46:32.772128 1488580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 13:46:32.775460 1488580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 13:46:32.777421 1488580 out.go:204]   - Booting up control plane ...
	I1225 13:46:32.777528 1488580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 13:46:32.777627 1488580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 13:46:32.777702 1488580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 13:46:32.794384 1488580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 13:46:32.795389 1488580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 13:46:32.796186 1488580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1225 13:46:32.965437 1488580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1225 13:46:41.469407 1488580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1225 13:46:41.495085 1488580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 13:46:41.518071 1488580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 13:46:42.070803 1488580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 13:46:42.071053 1488580 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-058636 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 13:46:42.588060 1488580 kubeadm.go:322] [bootstrap-token] Using token: 8hyzbl.uuceetxlupbfut2e
	I1225 13:46:42.589685 1488580 out.go:204]   - Configuring RBAC rules ...
	I1225 13:46:42.589832 1488580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 13:46:42.595962 1488580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 13:46:42.604641 1488580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 13:46:42.611180 1488580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 13:46:42.615257 1488580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 13:46:42.624061 1488580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 13:46:42.645255 1488580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 13:46:42.897807 1488580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1225 13:46:43.003716 1488580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1225 13:46:43.004581 1488580 kubeadm.go:322] 
	I1225 13:46:43.004679 1488580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1225 13:46:43.004699 1488580 kubeadm.go:322] 
	I1225 13:46:43.004781 1488580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1225 13:46:43.004789 1488580 kubeadm.go:322] 
	I1225 13:46:43.004814 1488580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1225 13:46:43.004882 1488580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 13:46:43.004958 1488580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 13:46:43.004967 1488580 kubeadm.go:322] 
	I1225 13:46:43.005064 1488580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1225 13:46:43.005076 1488580 kubeadm.go:322] 
	I1225 13:46:43.005147 1488580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 13:46:43.005160 1488580 kubeadm.go:322] 
	I1225 13:46:43.005224 1488580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1225 13:46:43.005325 1488580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 13:46:43.005436 1488580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 13:46:43.005470 1488580 kubeadm.go:322] 
	I1225 13:46:43.005615 1488580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 13:46:43.005725 1488580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1225 13:46:43.005736 1488580 kubeadm.go:322] 
	I1225 13:46:43.005869 1488580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8hyzbl.uuceetxlupbfut2e \
	I1225 13:46:43.006001 1488580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 \
	I1225 13:46:43.006031 1488580 kubeadm.go:322] 	--control-plane 
	I1225 13:46:43.006062 1488580 kubeadm.go:322] 
	I1225 13:46:43.006194 1488580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1225 13:46:43.006203 1488580 kubeadm.go:322] 
	I1225 13:46:43.006330 1488580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8hyzbl.uuceetxlupbfut2e \
	I1225 13:46:43.006474 1488580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:84a4ddb0dd05bb92bf3a371772ab07e0ff4c5e55744fd715c6e9a25592893459 
	I1225 13:46:43.007151 1488580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 13:46:43.007197 1488580 cni.go:84] Creating CNI manager for ""
	I1225 13:46:43.007209 1488580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:46:43.010210 1488580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1225 13:46:43.011787 1488580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1225 13:46:43.051894 1488580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1225 13:46:43.091993 1488580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 13:46:43.092068 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:43.092119 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da minikube.k8s.io/name=newest-cni-058636 minikube.k8s.io/updated_at=2023_12_25T13_46_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:43.408911 1488580 ops.go:34] apiserver oom_adj: -16
	I1225 13:46:43.442653 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:43.942683 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:44.442892 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:44.943301 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:45.443485 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:45.943674 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:46.442611 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:46.943067 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:47.443368 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:47.943515 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:48.443678 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:48.943146 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:49.442658 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:49.943512 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:50.443548 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:50.943671 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:51.443549 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:51.943071 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:52.443628 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:52.943163 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:53.442622 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:53.942780 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:54.442619 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:54.943112 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:55.443660 1488580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 13:46:55.567587 1488580 kubeadm.go:1088] duration metric: took 12.475582179s to wait for elevateKubeSystemPrivileges.
	I1225 13:46:55.567637 1488580 kubeadm.go:406] StartCluster complete in 26.371396011s
	I1225 13:46:55.567676 1488580 settings.go:142] acquiring lock: {Name:mk590cb5bd4b33bede2d004fbcc44001bca7c8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:55.567776 1488580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:46:55.570614 1488580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/kubeconfig: {Name:mk09ff27fb5cb7f1bfa92907edbc1c823418bc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:46:55.571022 1488580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 13:46:55.571054 1488580 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1225 13:46:55.571154 1488580 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-058636"
	I1225 13:46:55.571166 1488580 addons.go:69] Setting default-storageclass=true in profile "newest-cni-058636"
	I1225 13:46:55.571204 1488580 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-058636"
	I1225 13:46:55.571263 1488580 host.go:66] Checking if "newest-cni-058636" exists ...
	I1225 13:46:55.571205 1488580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-058636"
	I1225 13:46:55.571318 1488580 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:46:55.571752 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:55.571803 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:55.571806 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:55.571841 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:55.589971 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45413
	I1225 13:46:55.590503 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:55.590764 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I1225 13:46:55.591097 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:46:55.591124 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:55.591344 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:55.591588 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:55.591944 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:46:55.591972 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:55.592157 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:55.592215 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:55.592371 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:55.592600 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:46:55.596329 1488580 addons.go:237] Setting addon default-storageclass=true in "newest-cni-058636"
	I1225 13:46:55.596388 1488580 host.go:66] Checking if "newest-cni-058636" exists ...
	I1225 13:46:55.596711 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:55.596764 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:55.610539 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1225 13:46:55.611595 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:55.612320 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:46:55.612354 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:55.613083 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:55.613417 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:46:55.615267 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I1225 13:46:55.615492 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:55.617411 1488580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 13:46:55.615916 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:55.619109 1488580 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:46:55.619132 1488580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 13:46:55.619154 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:55.619633 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:46:55.619657 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:55.620090 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:55.620672 1488580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:55.620737 1488580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:55.622856 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:55.623335 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:55.623366 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:55.623618 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:55.623877 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:55.624054 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:55.624264 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:55.638392 1488580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I1225 13:46:55.638939 1488580 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:55.639537 1488580 main.go:141] libmachine: Using API Version  1
	I1225 13:46:55.639562 1488580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:55.640160 1488580 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:55.640374 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:46:55.643494 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:46:55.643786 1488580 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 13:46:55.643810 1488580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 13:46:55.643843 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHHostname
	I1225 13:46:55.647549 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:55.648022 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:2d:e4", ip: ""} in network mk-newest-cni-058636: {Iface:virbr4 ExpiryTime:2023-12-25 14:46:14 +0000 UTC Type:0 Mac:52:54:00:9b:2d:e4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:newest-cni-058636 Clientid:01:52:54:00:9b:2d:e4}
	I1225 13:46:55.648060 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | domain newest-cni-058636 has defined IP address 192.168.39.39 and MAC address 52:54:00:9b:2d:e4 in network mk-newest-cni-058636
	I1225 13:46:55.648298 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHPort
	I1225 13:46:55.648540 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHKeyPath
	I1225 13:46:55.648740 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .GetSSHUsername
	I1225 13:46:55.648929 1488580 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/newest-cni-058636/id_rsa Username:docker}
	I1225 13:46:55.796049 1488580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 13:46:55.831187 1488580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 13:46:55.954008 1488580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 13:46:56.106263 1488580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-058636" context rescaled to 1 replicas
	I1225 13:46:56.106319 1488580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 13:46:56.108047 1488580 out.go:177] * Verifying Kubernetes components...
	I1225 13:46:56.109354 1488580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 13:46:56.545417 1488580 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1225 13:46:56.960094 1488580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128859896s)
	I1225 13:46:56.960173 1488580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.006125332s)
	I1225 13:46:56.960223 1488580 main.go:141] libmachine: Making call to close driver server
	I1225 13:46:56.960242 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Close
	I1225 13:46:56.960175 1488580 main.go:141] libmachine: Making call to close driver server
	I1225 13:46:56.960338 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Close
	I1225 13:46:56.960794 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Closing plugin on server side
	I1225 13:46:56.960806 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Closing plugin on server side
	I1225 13:46:56.960813 1488580 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:46:56.960826 1488580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:46:56.960840 1488580 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:46:56.960849 1488580 main.go:141] libmachine: Making call to close driver server
	I1225 13:46:56.960852 1488580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:46:56.960861 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Close
	I1225 13:46:56.960862 1488580 main.go:141] libmachine: Making call to close driver server
	I1225 13:46:56.960872 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Close
	I1225 13:46:56.961140 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Closing plugin on server side
	I1225 13:46:56.961180 1488580 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:46:56.961205 1488580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:46:56.961520 1488580 main.go:141] libmachine: (newest-cni-058636) DBG | Closing plugin on server side
	I1225 13:46:56.961589 1488580 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:46:56.961599 1488580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:46:56.962336 1488580 api_server.go:52] waiting for apiserver process to appear ...
	I1225 13:46:56.962408 1488580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 13:46:56.993955 1488580 api_server.go:72] duration metric: took 887.565814ms to wait for apiserver process to appear ...
	I1225 13:46:56.993981 1488580 api_server.go:88] waiting for apiserver healthz status ...
	I1225 13:46:56.994000 1488580 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1225 13:46:57.002703 1488580 main.go:141] libmachine: Making call to close driver server
	I1225 13:46:57.002732 1488580 main.go:141] libmachine: (newest-cni-058636) Calling .Close
	I1225 13:46:57.003025 1488580 main.go:141] libmachine: Successfully made call to close driver server
	I1225 13:46:57.003040 1488580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1225 13:46:57.004906 1488580 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1225 13:46:57.006153 1488580 addons.go:508] enable addons completed in 1.435102956s: enabled=[storage-provisioner default-storageclass]
	I1225 13:46:57.004053 1488580 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I1225 13:46:57.009626 1488580 api_server.go:141] control plane version: v1.29.0-rc.2
	I1225 13:46:57.009652 1488580 api_server.go:131] duration metric: took 15.666107ms to wait for apiserver health ...
	I1225 13:46:57.009660 1488580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 13:46:57.035575 1488580 system_pods.go:59] 8 kube-system pods found
	I1225 13:46:57.035659 1488580 system_pods.go:61] "coredns-76f75df574-5m69v" [b4ca3725-19a9-4e60-8eea-0f6649f6d9c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:46:57.035673 1488580 system_pods.go:61] "coredns-76f75df574-vlfzf" [e5ad14c9-91e4-4e82-b198-9f7aea25e79f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 13:46:57.035683 1488580 system_pods.go:61] "etcd-newest-cni-058636" [4016bc06-22d4-4b7d-9b3d-a2345d5acb9f] Running
	I1225 13:46:57.035700 1488580 system_pods.go:61] "kube-apiserver-newest-cni-058636" [fc2eb671-aa2d-4943-8324-d8a2aaca315e] Running
	I1225 13:46:57.035706 1488580 system_pods.go:61] "kube-controller-manager-newest-cni-058636" [2edcc7c5-228f-40df-826e-55db772016ed] Running
	I1225 13:46:57.035711 1488580 system_pods.go:61] "kube-proxy-sfqg6" [df44915a-8881-4f3a-968a-0061cdbbbb17] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 13:46:57.035722 1488580 system_pods.go:61] "kube-scheduler-newest-cni-058636" [94a87b27-3bfc-49b0-a9a3-bfb2db54d76a] Running
	I1225 13:46:57.035736 1488580 system_pods.go:61] "storage-provisioner" [2be01348-2058-4b07-abf2-13a23afa53a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 13:46:57.035747 1488580 system_pods.go:74] duration metric: took 26.081065ms to wait for pod list to return data ...
	I1225 13:46:57.035760 1488580 default_sa.go:34] waiting for default service account to be created ...
	I1225 13:46:57.039842 1488580 default_sa.go:45] found service account: "default"
	I1225 13:46:57.039874 1488580 default_sa.go:55] duration metric: took 4.104269ms for default service account to be created ...
	I1225 13:46:57.039886 1488580 kubeadm.go:581] duration metric: took 933.503521ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1225 13:46:57.039906 1488580 node_conditions.go:102] verifying NodePressure condition ...
	I1225 13:46:57.043765 1488580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1225 13:46:57.043800 1488580 node_conditions.go:123] node cpu capacity is 2
	I1225 13:46:57.043816 1488580 node_conditions.go:105] duration metric: took 3.904503ms to run NodePressure ...
	I1225 13:46:57.043840 1488580 start.go:228] waiting for startup goroutines ...
	I1225 13:46:57.043850 1488580 start.go:233] waiting for cluster config update ...
	I1225 13:46:57.043863 1488580 start.go:242] writing updated cluster config ...
	I1225 13:46:57.044178 1488580 ssh_runner.go:195] Run: rm -f paused
	I1225 13:46:57.117997 1488580 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I1225 13:46:57.120191 1488580 out.go:177] * Done! kubectl is now configured to use "newest-cni-058636" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:02 UTC, ends at Mon 2023-12-25 13:47:51 UTC. --
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.391553463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512071391531486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7f4cfb76-e910-46f9-a7d6-7a0c08c78493 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.392250171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ebec6105-7f44-49fc-8170-09c24c71bc65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.392319689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ebec6105-7f44-49fc-8170-09c24c71bc65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.392660810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kubernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.container.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ebec6105-7f44-49fc-8170-09c24c71bc65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.436330510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ea6e94a8-e0a7-47fa-bf87-b83debe44d50 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.436420499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ea6e94a8-e0a7-47fa-bf87-b83debe44d50 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.437550174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=97813c3e-b22b-47ca-87d0-322ba6c76089 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.438215564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512071438199055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=97813c3e-b22b-47ca-87d0-322ba6c76089 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.438889057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1609dc56-efa3-42b3-b6c2-401212c05d01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.438965906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1609dc56-efa3-42b3-b6c2-401212c05d01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.439244400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kubernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.container.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1609dc56-efa3-42b3-b6c2-401212c05d01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.481733116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=df667191-0ab1-4bb9-886d-91b4ac8f5c4d name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.481874618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=df667191-0ab1-4bb9-886d-91b4ac8f5c4d name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.484657429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f7c97e90-587a-4615-8182-47a6ad358d2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.485400279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512071485224233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f7c97e90-587a-4615-8182-47a6ad358d2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.488705555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3fd7c624-6905-44be-8e65-489b568ce9c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.488790964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3fd7c624-6905-44be-8e65-489b568ce9c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.489212900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kubernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.container.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3fd7c624-6905-44be-8e65-489b568ce9c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.532852830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4ae75947-e024-4af5-b3a8-64def8a8c56f name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.532911197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4ae75947-e024-4af5-b3a8-64def8a8c56f name=/runtime.v1.RuntimeService/Version
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.534280892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=909c9bea-1bdc-4562-85d5-8d8577d3f842 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.534676966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512071534660959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=909c9bea-1bdc-4562-85d5-8d8577d3f842 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.535250478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4b1cef5f-1168-4d3e-95e7-f27134255edb name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.535353524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4b1cef5f-1168-4d3e-95e7-f27134255edb name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:47:51 no-preload-330063 crio[717]: time="2023-12-25 13:47:51.535689613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1703510843608920564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278192681968ebd4f81401794a1cc5b5dd6426a2821b042d8134565cbaad3cf,PodSandboxId:36aa4226da02008ebca03522998c66e6de98e2f2c033a537c5e8fc50c7b7947b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510824465737928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a84e545-a50b-403e-9963-1bf5157d9cde,},Annotations:map[string]string{io.kubernetes.container.hash: 327b23dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e,PodSandboxId:e3a8c0fdae79e7d9aac50a0a5141ed9fbfe48162215f99ba923bc9cf87b5ee86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1703510820300318277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pwk9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5856ad8d-6c49-4225-8890-4c912f839ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d20ac5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a,PodSandboxId:c74d378a7ce6ded6932c2d5ab706b63b92a4a8766b24bf8acb43084ef5cfb6d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1703510812320780628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7097decf-3a19-454b-9c87-df6cb2da4de4,},Annotations:map[string]string{io.kubernetes.container.hash: 83859a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36,PodSandboxId:31bea21ee639089889d0178bf5552ae9f6f277315e241c4a640dde9c0d057d23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1703510812184337156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jbch6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af021a36-09e9
-4fba-8f23-cef46ed82aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 4f342d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83,PodSandboxId:980debbc80268076bced2c3d030319f03e82306b452282a7509d724f97682999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1703510805969845396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338605dc598a7e4187ea
3f5ef90f134a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0,PodSandboxId:8de4520c023254ceeb2f3c720719f73abe70d3f19c319c8855b525935184a742,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1703510805715416130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d3fc53b5b8bfda921184dee5cf991d,},Annotations:map[string]string{io.kub
ernetes.container.hash: fb57994f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f,PodSandboxId:ceecef539d3f7f9fa7f3cecf79744dac1df8fb7e08a4c82556684f26b8450722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1703510805564580424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2090eb0d558161c49f513eee6a2720,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: a3994894,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4,PodSandboxId:1491fefd67203be34cddf7275e1eee163b25571536431e5f89ec910d813eeddc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1703510805315974884,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-330063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f99e8d8aa6fd7d543933d989a9b8670,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4b1cef5f-1168-4d3e-95e7-f27134255edb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f22e0dc3ae98f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   c74d378a7ce6d       storage-provisioner
	e278192681968       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   36aa4226da020       busybox
	7ed64b4585957       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   e3a8c0fdae79e       coredns-76f75df574-pwk9h
	41d1cc3530c54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   c74d378a7ce6d       storage-provisioner
	b9051ad32027d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      20 minutes ago      Running             kube-proxy                1                   31bea21ee6390       kube-proxy-jbch6
	3562a602302de       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      21 minutes ago      Running             kube-scheduler            1                   980debbc80268       kube-scheduler-no-preload-330063
	6d72676ee211f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      21 minutes ago      Running             etcd                      1                   8de4520c02325       etcd-no-preload-330063
	ccc0750bcacd5       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      21 minutes ago      Running             kube-apiserver            1                   ceecef539d3f7       kube-apiserver-no-preload-330063
	ddc7a61af803e       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      21 minutes ago      Running             kube-controller-manager   1                   1491fefd67203       kube-controller-manager-no-preload-330063
	
	
	==> coredns [7ed64b4585957ba544aced3e9496413bb84edacb0e2298bcecd6f21f3af56e5e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40030 - 9349 "HINFO IN 7359491548542591292.800707443245296279. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009390068s
	
	
	==> describe nodes <==
	Name:               no-preload-330063
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-330063
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=no-preload-330063
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_19_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:18:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-330063
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:47:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:47:44 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:47:44 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:47:44 +0000   Mon, 25 Dec 2023 13:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:47:44 +0000   Mon, 25 Dec 2023 13:27:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.232
	  Hostname:    no-preload-330063
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 406372a65c9a43bf87e8eb26880385d4
	  System UUID:                406372a6-5c9a-43bf-87e8-eb26880385d4
	  Boot ID:                    23814a5a-2071-47fa-b212-ea86c8e3f921
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-pwk9h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-330063                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-330063             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-330063    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-jbch6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-330063             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-q97kl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-330063 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-330063 event: Registered Node no-preload-330063 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-330063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-330063 event: Registered Node no-preload-330063 in Controller
	
	
	==> dmesg <==
	[Dec25 13:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072628] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.414624] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec25 13:26] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149826] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.433258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.371786] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.112521] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.175395] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.132801] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.256668] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +29.080641] systemd-fstab-generator[1334]: Ignoring "noauto" for root device
	[ +15.448146] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [6d72676ee211f16049bd236720a31991663b9aaf880994daa46705de6fffefd0] <==
	{"level":"warn","ts":"2023-12-25T13:27:03.964339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.958264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-25T13:27:03.964437Z","caller":"traceutil/trace.go:171","msg":"trace[1119136746] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:580; }","duration":"183.998882ms","start":"2023-12-25T13:27:03.780362Z","end":"2023-12-25T13:27:03.964361Z","steps":["trace[1119136746] 'agreement among raft nodes before linearized reading'  (duration: 183.919723ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:27:04.360552Z","caller":"traceutil/trace.go:171","msg":"trace[1645593883] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"378.566202ms","start":"2023-12-25T13:27:03.981972Z","end":"2023-12-25T13:27:04.360538Z","steps":["trace[1645593883] 'read index received'  (duration: 372.899014ms)","trace[1645593883] 'applied index is now lower than readState.Index'  (duration: 5.666442ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:04.360982Z","caller":"traceutil/trace.go:171","msg":"trace[59349680] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"382.91078ms","start":"2023-12-25T13:27:03.978058Z","end":"2023-12-25T13:27:04.360969Z","steps":["trace[59349680] 'process raft request'  (duration: 377.023041ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.361269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.978039Z","time spent":"383.066941ms","remote":"127.0.0.1:46010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5422,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-no-preload-330063\" mod_revision:580 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-no-preload-330063\" value_size:5365 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-no-preload-330063\" > >"}
	{"level":"warn","ts":"2023-12-25T13:27:04.361471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.535237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-25T13:27:04.361529Z","caller":"traceutil/trace.go:171","msg":"trace[253838963] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:581; }","duration":"379.639863ms","start":"2023-12-25T13:27:03.981879Z","end":"2023-12-25T13:27:04.361519Z","steps":["trace[253838963] 'agreement among raft nodes before linearized reading'  (duration: 379.558834ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.361589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.981858Z","time spent":"379.719802ms","remote":"127.0.0.1:46014","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":217,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" "}
	{"level":"warn","ts":"2023-12-25T13:27:04.361749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.023112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-330063\" ","response":"range_response_count:1 size:5437"}
	{"level":"info","ts":"2023-12-25T13:27:04.361798Z","caller":"traceutil/trace.go:171","msg":"trace[669444436] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-330063; range_end:; response_count:1; response_revision:581; }","duration":"172.070726ms","start":"2023-12-25T13:27:04.189719Z","end":"2023-12-25T13:27:04.36179Z","steps":["trace[669444436] 'agreement among raft nodes before linearized reading'  (duration: 172.005959ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:36:49.470484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":836}
	{"level":"info","ts":"2023-12-25T13:36:49.474631Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":836,"took":"3.577731ms","hash":902266057}
	{"level":"info","ts":"2023-12-25T13:36:49.474764Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":902266057,"revision":836,"compact-revision":-1}
	{"level":"info","ts":"2023-12-25T13:41:49.48075Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1078}
	{"level":"info","ts":"2023-12-25T13:41:49.482988Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1078,"took":"1.59662ms","hash":1002926527}
	{"level":"info","ts":"2023-12-25T13:41:49.483077Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1002926527,"revision":1078,"compact-revision":836}
	{"level":"info","ts":"2023-12-25T13:46:30.267502Z","caller":"traceutil/trace.go:171","msg":"trace[660424890] linearizableReadLoop","detail":"{readStateIndex:1823; appliedIndex:1822; }","duration":"358.155843ms","start":"2023-12-25T13:46:29.909231Z","end":"2023-12-25T13:46:30.267387Z","steps":["trace[660424890] 'read index received'  (duration: 357.809144ms)","trace[660424890] 'applied index is now lower than readState.Index'  (duration: 346.188µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:46:30.267631Z","caller":"traceutil/trace.go:171","msg":"trace[1509621500] transaction","detail":"{read_only:false; response_revision:1548; number_of_response:1; }","duration":"370.140627ms","start":"2023-12-25T13:46:29.897465Z","end":"2023-12-25T13:46:30.267606Z","steps":["trace[1509621500] 'process raft request'  (duration: 369.72174ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:46:30.268005Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:46:29.897443Z","time spent":"370.288914ms","remote":"127.0.0.1:46006","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1547 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-25T13:46:30.268263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.59257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:46:30.268341Z","caller":"traceutil/trace.go:171","msg":"trace[1234247686] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1548; }","duration":"359.228822ms","start":"2023-12-25T13:46:29.909096Z","end":"2023-12-25T13:46:30.268324Z","steps":["trace[1234247686] 'agreement among raft nodes before linearized reading'  (duration: 358.567082ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:46:30.268451Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:46:29.909082Z","time spent":"359.358946ms","remote":"127.0.0.1:45960","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-12-25T13:46:49.49187Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1320}
	{"level":"info","ts":"2023-12-25T13:46:49.493656Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1320,"took":"1.442003ms","hash":173761428}
	{"level":"info","ts":"2023-12-25T13:46:49.493736Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":173761428,"revision":1320,"compact-revision":1078}
	
	
	==> kernel <==
	 13:47:51 up 21 min,  0 users,  load average: 0.11, 0.26, 0.21
	Linux no-preload-330063 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ccc0750bcacd58e99ac0b4e57fcc58ad31c3baebf6dc546f5b037c09e7764c2f] <==
	I1225 13:42:51.983923       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:44:51.982597       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:44:51.982755       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:44:51.982768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:44:51.984072       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:44:51.984338       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:44:51.984480       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:46:50.987461       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:46:50.987837       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1225 13:46:51.988048       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:46:51.988423       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:46:51.988466       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:46:51.988501       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:46:51.988577       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:46:51.989676       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:47:51.989718       1 handler_proxy.go:93] no RequestInfo found in the context
	W1225 13:47:51.989800       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:47:51.989825       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:47:51.989836       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1225 13:47:51.989888       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:47:51.991950       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ddc7a61af803ebd648d7c2828f70a635d7abd27ab9a57252120f18506456e6b4] <==
	I1225 13:42:05.444316       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:42:34.872647       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:42:35.453651       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:42:53.332963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="242.193µs"
	E1225 13:43:04.880085       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:43:05.462261       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:43:07.332741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="97.786µs"
	E1225 13:43:34.885917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:43:35.474100       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:04.891813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:05.484956       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:34.897722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:35.494062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:04.906814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:05.502412       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:34.912462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:35.511665       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:04.919513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:05.524851       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:34.929463       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:35.536914       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:04.935563       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:05.547488       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:34.942687       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:35.556995       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b9051ad32027de90f80a325f97287549aae8959e611088dca4386a54c79a3d36] <==
	I1225 13:26:52.668728       1 server_others.go:72] "Using iptables proxy"
	I1225 13:26:52.684970       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.232"]
	I1225 13:26:52.729430       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1225 13:26:52.729478       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:26:52.729494       1 server_others.go:168] "Using iptables Proxier"
	I1225 13:26:52.732746       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:26:52.733187       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1225 13:26:52.733225       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:26:52.734047       1 config.go:188] "Starting service config controller"
	I1225 13:26:52.734101       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:26:52.734192       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:26:52.734199       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:26:52.736796       1 config.go:315] "Starting node config controller"
	I1225 13:26:52.736882       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:26:52.834978       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 13:26:52.835057       1 shared_informer.go:318] Caches are synced for service config
	I1225 13:26:52.837903       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3562a602302de62eec1a7805ccf37449eae6e6e5a59246136290a4e4b7f29b83] <==
	I1225 13:26:48.332820       1 serving.go:380] Generated self-signed cert in-memory
	W1225 13:26:50.947694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:26:50.947815       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:26:50.947902       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:26:50.947908       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:26:51.001217       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I1225 13:26:51.001297       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:26:51.002713       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 13:26:51.002815       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 13:26:51.006617       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1225 13:26:51.008307       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 13:26:51.103745       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:02 UTC, ends at Mon 2023-12-25 13:47:52 UTC. --
	Dec 25 13:45:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:45:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:45:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:45:48 no-preload-330063 kubelet[1340]: E1225 13:45:48.318317    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:46:01 no-preload-330063 kubelet[1340]: E1225 13:46:01.314184    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:46:16 no-preload-330063 kubelet[1340]: E1225 13:46:16.316466    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:46:27 no-preload-330063 kubelet[1340]: E1225 13:46:27.314422    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:46:41 no-preload-330063 kubelet[1340]: E1225 13:46:41.314230    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:46:44 no-preload-330063 kubelet[1340]: E1225 13:46:44.303253    1340 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 25 13:46:44 no-preload-330063 kubelet[1340]: E1225 13:46:44.328492    1340 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:46:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:46:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:46:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:46:54 no-preload-330063 kubelet[1340]: E1225 13:46:54.315462    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:47:09 no-preload-330063 kubelet[1340]: E1225 13:47:09.314300    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:47:20 no-preload-330063 kubelet[1340]: E1225 13:47:20.315541    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:47:33 no-preload-330063 kubelet[1340]: E1225 13:47:33.314461    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	Dec 25 13:47:44 no-preload-330063 kubelet[1340]: E1225 13:47:44.329793    1340 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:47:44 no-preload-330063 kubelet[1340]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:47:44 no-preload-330063 kubelet[1340]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:47:44 no-preload-330063 kubelet[1340]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:47:47 no-preload-330063 kubelet[1340]: E1225 13:47:47.328749    1340 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:47:47 no-preload-330063 kubelet[1340]: E1225 13:47:47.329076    1340 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:47:47 no-preload-330063 kubelet[1340]: E1225 13:47:47.329425    1340 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qc8t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-q97kl_kube-system(4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:47:47 no-preload-330063 kubelet[1340]: E1225 13:47:47.329543    1340 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-q97kl" podUID="4250fbad-2c2f-4ae5-ac16-c1a4425c5dcc"
	
	
	==> storage-provisioner [41d1cc3530c54dd0132eb930ff1e1c038e9290bb8af8effe488655cbf057e00a] <==
	I1225 13:26:52.671292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 13:27:22.679636       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f22e0dc3ae98f54ec9642d89b16b5a31e5e9fba155bda1662bfdb706c17c64f3] <==
	I1225 13:27:23.789654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:23.806721       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:23.806925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:27:41.221597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:27:41.221962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296!
	I1225 13:27:41.226612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"984abe25-ea8f-40ab-a01d-41b1db70758a", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296 became leader
	I1225 13:27:41.323956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-330063_c10e988d-6412-408b-b4d2-af4d7ed42296!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-330063 -n no-preload-330063
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-330063 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-q97kl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl: exit status 1 (71.447227ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-q97kl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-330063 describe pod metrics-server-57f55c9bc5-q97kl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (452.12s)
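For reference, the wait that timed out above can be approximated by hand. This is only a rough sketch assembled from names that appear in this report (the no-preload-330063 context, and the k8s-app=kubernetes-dashboard selector that the same test body uses for the embed-certs profile below); it is not the harness code itself and assumes the cluster is still running:

	# list the dashboard pods the test waits on, then wait for readiness (sketch)
	kubectl --context no-preload-330063 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-330063 wait --for=condition=Ready pod \
	  -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m0s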

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:41:26.363256 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 13:42:49.413501 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 13:43:56.706329 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:44:07.347717 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:49:32.081526756 +0000 UTC m=+5596.700003790
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-880612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-880612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (66.051967ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-880612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880612 logs -n 25: (1.360054826s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-021022                              | cert-expiration-021022       | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:19 UTC |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:19 UTC | 25 Dec 23 13:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-176938                              | stopped-upgrade-176938       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-246503 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:20 UTC |
	|         | disable-driver-mounts-246503                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:22 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-198979             | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:20 UTC | 25 Dec 23 13:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-330063                  | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880612            | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-344803  | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC | 25 Dec 23 13:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:22 UTC |                     |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880612                 | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880612                                  | embed-certs-880612           | jenkins | v1.32.0 | 25 Dec 23 13:24 UTC | 25 Dec 23 13:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-344803       | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-344803 | jenkins | v1.32.0 | 25 Dec 23 13:25 UTC | 25 Dec 23 13:36 UTC |
	|         | default-k8s-diff-port-344803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-198979                              | old-k8s-version-198979       | jenkins | v1.32.0 | 25 Dec 23 13:45 UTC | 25 Dec 23 13:45 UTC |
	| start   | -p newest-cni-058636 --memory=2200 --alsologtostderr   | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:45 UTC | 25 Dec 23 13:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-058636             | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:46 UTC | 25 Dec 23 13:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-058636                                   | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-330063                                   | no-preload-330063            | jenkins | v1.32.0 | 25 Dec 23 13:47 UTC | 25 Dec 23 13:47 UTC |
	| start   | -p auto-712615 --memory=3072                           | auto-712615                  | jenkins | v1.32.0 | 25 Dec 23 13:47 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-058636                  | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-058636 --memory=2200 --alsologtostderr   | newest-cni-058636            | jenkins | v1.32.0 | 25 Dec 23 13:49 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:49:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:49:30.860393 1489961 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:49:30.860512 1489961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:49:30.860522 1489961 out.go:309] Setting ErrFile to fd 2...
	I1225 13:49:30.860527 1489961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:49:30.860722 1489961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:49:30.861325 1489961 out.go:303] Setting JSON to false
	I1225 13:49:30.862346 1489961 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160324,"bootTime":1703351847,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:49:30.862415 1489961 start.go:138] virtualization: kvm guest
	I1225 13:49:30.866011 1489961 out.go:177] * [newest-cni-058636] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:49:30.867684 1489961 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:49:30.867712 1489961 notify.go:220] Checking for updates...
	I1225 13:49:30.869552 1489961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:49:30.871852 1489961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:49:30.873655 1489961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:49:30.876631 1489961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:49:30.878619 1489961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:49:30.880639 1489961 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:49:30.881377 1489961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:49:30.881506 1489961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:49:30.898043 1489961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I1225 13:49:30.898567 1489961 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:49:30.899187 1489961 main.go:141] libmachine: Using API Version  1
	I1225 13:49:30.899210 1489961 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:49:30.899573 1489961 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:49:30.899763 1489961 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:49:30.900016 1489961 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:49:30.900328 1489961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:49:30.900392 1489961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:49:30.916587 1489961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I1225 13:49:30.917086 1489961 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:49:30.917650 1489961 main.go:141] libmachine: Using API Version  1
	I1225 13:49:30.917684 1489961 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:49:30.918101 1489961 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:49:30.918299 1489961 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:49:30.959908 1489961 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 13:49:30.961270 1489961 start.go:298] selected driver: kvm2
	I1225 13:49:30.961283 1489961 start.go:902] validating driver "kvm2" against &{Name:newest-cni-058636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Scheduled
Stop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:49:30.961425 1489961 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:49:30.962146 1489961 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:49:30.962257 1489961 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:49:30.978816 1489961 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:49:30.979263 1489961 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 13:49:30.979344 1489961 cni.go:84] Creating CNI manager for ""
	I1225 13:49:30.979358 1489961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 13:49:30.979371 1489961 start_flags.go:323] config:
	{Name:newest-cni-058636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-058636 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:49:30.979563 1489961 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:49:30.981410 1489961 out.go:177] * Starting control plane node newest-cni-058636 in cluster newest-cni-058636
	I1225 13:49:30.982700 1489961 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1225 13:49:30.982755 1489961 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1225 13:49:30.982769 1489961 cache.go:56] Caching tarball of preloaded images
	I1225 13:49:30.982864 1489961 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:49:30.982881 1489961 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1225 13:49:30.983022 1489961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/config.json ...
	I1225 13:49:30.983267 1489961 start.go:365] acquiring machines lock for newest-cni-058636: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:49:30.983319 1489961 start.go:369] acquired machines lock for "newest-cni-058636" in 30.856µs
	I1225 13:49:30.983340 1489961 start.go:96] Skipping create...Using existing machine configuration
	I1225 13:49:30.983350 1489961 fix.go:54] fixHost starting: 
	I1225 13:49:30.983637 1489961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:49:30.983677 1489961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:49:31.000270 1489961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I1225 13:49:31.000761 1489961 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:49:31.001407 1489961 main.go:141] libmachine: Using API Version  1
	I1225 13:49:31.001444 1489961 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:49:31.001794 1489961 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:49:31.002023 1489961 main.go:141] libmachine: (newest-cni-058636) Calling .DriverName
	I1225 13:49:31.002222 1489961 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:49:31.004168 1489961 fix.go:102] recreateIfNeeded on newest-cni-058636: state=Running err=<nil>
	W1225 13:49:31.004201 1489961 fix.go:128] unexpected machine state, will restart: <nil>
	I1225 13:49:31.007190 1489961 out.go:177] * Updating the running kvm2 "newest-cni-058636" VM ...
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:25 UTC, ends at Mon 2023-12-25 13:49:33 UTC. --
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.927680266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512172927667845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bde17a98-7500-463d-b177-1bb86b083e36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.928474079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aeaf8f40-7a9e-4ef4-8cf5-48ba666ac490 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.928521106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aeaf8f40-7a9e-4ef4-8cf5-48ba666ac490 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.928711935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aeaf8f40-7a9e-4ef4-8cf5-48ba666ac490 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.969793332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=28edb39e-e073-4e00-93ed-9dd3d1ef57d9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.969853081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=28edb39e-e073-4e00-93ed-9dd3d1ef57d9 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.971734037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f0f0e29f-f978-4bae-a7d6-6ef5b10b148a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.972337668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512172972319200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f0f0e29f-f978-4bae-a7d6-6ef5b10b148a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.972965642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a300534-12f1-4938-aa28-0926c4717703 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.973009357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a300534-12f1-4938-aa28-0926c4717703 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:32 embed-certs-880612 crio[725]: time="2023-12-25 13:49:32.973279570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a300534-12f1-4938-aa28-0926c4717703 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.013639440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=26186529-e8c3-424d-8d31-832348520371 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.013699651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=26186529-e8c3-424d-8d31-832348520371 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.015249385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a6813588-08f2-4e92-a12f-17f565a3aa06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.016103257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512173016083505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a6813588-08f2-4e92-a12f-17f565a3aa06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.016818852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=31d58879-79d9-4247-8f6d-7fad4c4dd8f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.016917163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=31d58879-79d9-4247-8f6d-7fad4c4dd8f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.017220267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=31d58879-79d9-4247-8f6d-7fad4c4dd8f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.055616531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e53147b0-0e82-4a83-9a6a-603c9879e488 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.055681130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e53147b0-0e82-4a83-9a6a-603c9879e488 name=/runtime.v1.RuntimeService/Version
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.057686897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e0228ac0-74f4-480f-ae8b-9e4e59f2d348 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.058340324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512173058317035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e0228ac0-74f4-480f-ae8b-9e4e59f2d348 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.059106168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99d5887e-a3eb-4777-8242-bf20e01ce0bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.059280673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99d5887e-a3eb-4777-8242-bf20e01ce0bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:49:33 embed-certs-880612 crio[725]: time="2023-12-25 13:49:33.059675850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703510854743517342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c807-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ffef136c76be1cb867b4c4d9753939f7f3879d31b1a949fec922ede380e5d2,PodSandboxId:6eef49ee6443c2c143d21ef7e952b854ef7dc70997024a952018296c871fdf95,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1703510839434660873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22ab1036-0223-4df4-8c3d-ea4eb111089c,},Annotations:map[string]string{io.kubernetes.container.hash: c20fe0a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4,PodSandboxId:9e278279bae5074a68a2173c176ce5a2a2d459e113efce8550b34c643c706ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703510830648791095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sbn7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de44565-3ada-41a3-bcf0-b9229d3edab8,},Annotations:map[string]string{io.kubernetes.container.hash: 597ec067,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6,PodSandboxId:35b5ee6655e59dced29bc9fdb1d68aaac2e90e482eccf64cee9712d0794baa0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703510824360413888,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4f790b-
a982-4613-b671-c45f037503d9,},Annotations:map[string]string{io.kubernetes.container.hash: 91e97d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7,PodSandboxId:b6c7a9f93ec8e4e7437a53c1581fa11af8b7caa8ebf67d4767901df13abfd9b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1703510824162584899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34fa49ce-c80
7-4f30-9be6-317676447640,},Annotations:map[string]string{io.kubernetes.container.hash: 1c067c06,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480,PodSandboxId:dc1ed619fa80d14ae9d4f30a871498603b02afd3a445b1b02f04ba4d19996e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703510815261103225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bde65b4d6cb252e85
87dc9f11057b41,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0,PodSandboxId:1e294a76c33e3f9340e06618eaabe827c2fb6cea75e5ef782a2db0ed35879add,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703510815075342721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc1bf0c03348c1bb22a32100d83871c7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e,PodSandboxId:6332b1316abf7f2e50f4e117edb9b8d3fb8adf760dadb65122d4cc99ff21275b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703510814785049695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6125e6d43fa5fae962ca8ca79893bcbf,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 341c5164,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df,PodSandboxId:b7aa8697e2cc4c8d0753dc660caf085d0c198eca4730c27744cad53eac89bbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703510814551650602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42018a8975aaea5aada1337c95617dd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a8f0cd9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99d5887e-a3eb-4777-8242-bf20e01ce0bd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0851cb5599abc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   b6c7a9f93ec8e       storage-provisioner
	55ffef136c76b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   6eef49ee6443c       busybox
	ea6832c3489cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   9e278279bae50       coredns-5dd5756b68-sbn7n
	5a29e019e5e0d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      22 minutes ago      Running             kube-proxy                1                   35b5ee6655e59       kube-proxy-677d7
	03bfbdc74bd6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   b6c7a9f93ec8e       storage-provisioner
	868a5855738ae       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      22 minutes ago      Running             kube-scheduler            1                   dc1ed619fa80d       kube-scheduler-embed-certs-880612
	e34911f64a889       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      22 minutes ago      Running             kube-controller-manager   1                   1e294a76c33e3       kube-controller-manager-embed-certs-880612
	9990b54a38a74       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   6332b1316abf7       etcd-embed-certs-880612
	5ec3a53c74277       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      22 minutes ago      Running             kube-apiserver            1                   b7aa8697e2cc4       kube-apiserver-embed-certs-880612
	
	
	==> coredns [ea6832c3489cd26fe1a7142c9b5f26b5187572d8272fd9c60a04e80c3a6c15e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40511 - 26727 "HINFO IN 4869349565427911480.5933393956858728803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009754956s
	
	
	==> describe nodes <==
	Name:               embed-certs-880612
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=embed-certs-880612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_21_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880612
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:49:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:47:55 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:47:55 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:47:55 +0000   Mon, 25 Dec 2023 13:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:47:55 +0000   Mon, 25 Dec 2023 13:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.179
	  Hostname:    embed-certs-880612
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 53a35066886d40559dab82026d1a57cf
	  System UUID:                53a35066-886d-4055-9dab-82026d1a57cf
	  Boot ID:                    9dd57709-c8a9-4fd4-af70-63cbbb7017c5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-sbn7n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-880612                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-880612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-880612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-677d7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-880612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-chnh2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-880612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-880612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-880612 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node embed-certs-880612 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-880612 event: Registered Node embed-certs-880612 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-880612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-880612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-880612 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-880612 event: Registered Node embed-certs-880612 in Controller
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071699] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519574] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.540576] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156258] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.523338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.637873] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.110222] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.167392] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.128056] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.255042] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.537944] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec25 13:27] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.124799] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [9990b54a38a74c715bd51f525d7161d7b0af1950763e5cc94a42cca414926d1e] <==
	{"level":"info","ts":"2023-12-25T13:27:04.337218Z","caller":"traceutil/trace.go:171","msg":"trace[318654554] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:528; }","duration":"363.258535ms","start":"2023-12-25T13:27:03.973946Z","end":"2023-12-25T13:27:04.337205Z","steps":["trace[318654554] 'read index received'  (duration: 186.499377ms)","trace[318654554] 'applied index is now lower than readState.Index'  (duration: 176.756873ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:27:04.337346Z","caller":"traceutil/trace.go:171","msg":"trace[1059966004] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"364.57997ms","start":"2023-12-25T13:27:03.972756Z","end":"2023-12-25T13:27:04.337336Z","steps":["trace[1059966004] 'process raft request'  (duration: 364.37631ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.337456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.787696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:27:04.337575Z","caller":"traceutil/trace.go:171","msg":"trace[472828051] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:500; }","duration":"234.91054ms","start":"2023-12-25T13:27:04.102654Z","end":"2023-12-25T13:27:04.337565Z","steps":["trace[472828051] 'agreement among raft nodes before linearized reading'  (duration: 234.753193ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.33745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.972742Z","time spent":"364.657592ms","remote":"127.0.0.1:51672","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2327,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:427 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2289 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >"}
	{"level":"warn","ts":"2023-12-25T13:27:04.337771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.92335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2023-12-25T13:27:04.337846Z","caller":"traceutil/trace.go:171","msg":"trace[544063427] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:500; }","duration":"363.9963ms","start":"2023-12-25T13:27:03.973836Z","end":"2023-12-25T13:27:04.337833Z","steps":["trace[544063427] 'agreement among raft nodes before linearized reading'  (duration: 363.885611ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:27:04.337947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.973822Z","time spent":"364.117179ms","remote":"127.0.0.1:51708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":863,"request content":"key:\"/registry/clusterroles/system:aggregate-to-admin\" "}
	{"level":"info","ts":"2023-12-25T13:27:04.337347Z","caller":"traceutil/trace.go:171","msg":"trace[61535417] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"369.108788ms","start":"2023-12-25T13:27:03.968227Z","end":"2023-12-25T13:27:04.337336Z","steps":["trace[61535417] 'process raft request'  (duration: 192.268032ms)","trace[61535417] 'compare'  (duration: 175.801035ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:27:04.338305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:27:03.968208Z","time spent":"370.056333ms","remote":"127.0.0.1:51648","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" mod_revision:494 > success:<request_put:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" value_size:628 lease:4899789543394873728 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-880612.17a4160f2a653693\" > >"}
	{"level":"info","ts":"2023-12-25T13:36:58.814408Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":850}
	{"level":"info","ts":"2023-12-25T13:36:58.817783Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":850,"took":"2.475828ms","hash":2274481468}
	{"level":"info","ts":"2023-12-25T13:36:58.817952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2274481468,"revision":850,"compact-revision":-1}
	{"level":"info","ts":"2023-12-25T13:41:58.822445Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1093}
	{"level":"info","ts":"2023-12-25T13:41:58.82452Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1093,"took":"1.710486ms","hash":274368564}
	{"level":"info","ts":"2023-12-25T13:41:58.824619Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":274368564,"revision":1093,"compact-revision":850}
	{"level":"info","ts":"2023-12-25T13:46:29.160515Z","caller":"traceutil/trace.go:171","msg":"trace[1653477712] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"121.459839ms","start":"2023-12-25T13:46:29.038988Z","end":"2023-12-25T13:46:29.160447Z","steps":["trace[1653477712] 'process raft request'  (duration: 121.332942ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:46:58.880581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1335}
	{"level":"info","ts":"2023-12-25T13:46:58.883557Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1335,"took":"2.542599ms","hash":1167433014}
	{"level":"info","ts":"2023-12-25T13:46:58.883643Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1167433014,"revision":1335,"compact-revision":1093}
	{"level":"info","ts":"2023-12-25T13:48:30.28074Z","caller":"traceutil/trace.go:171","msg":"trace[321197141] linearizableReadLoop","detail":"{readStateIndex:1962; appliedIndex:1961; }","duration":"298.018437ms","start":"2023-12-25T13:48:29.98266Z","end":"2023-12-25T13:48:30.280678Z","steps":["trace[321197141] 'read index received'  (duration: 297.850139ms)","trace[321197141] 'applied index is now lower than readState.Index'  (duration: 167.823µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:48:30.281224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.470795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-25T13:48:30.281268Z","caller":"traceutil/trace.go:171","msg":"trace[808844499] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1655; }","duration":"298.629224ms","start":"2023-12-25T13:48:29.982627Z","end":"2023-12-25T13:48:30.281256Z","steps":["trace[808844499] 'agreement among raft nodes before linearized reading'  (duration: 298.384287ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:48:30.281445Z","caller":"traceutil/trace.go:171","msg":"trace[1748956399] transaction","detail":"{read_only:false; response_revision:1655; number_of_response:1; }","duration":"324.779088ms","start":"2023-12-25T13:48:29.956654Z","end":"2023-12-25T13:48:30.281433Z","steps":["trace[1748956399] 'process raft request'  (duration: 323.951817ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:48:30.281617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:48:29.956636Z","time spent":"324.871313ms","remote":"127.0.0.1:51668","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1654 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 13:49:33 up 23 min,  0 users,  load average: 0.19, 0.20, 0.13
	Linux embed-certs-880612 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5ec3a53c74277e94f9969c85e4f838401497f3a0564390b3499e0ff267f5a6df] <==
	E1225 13:45:01.912769       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:45:01.912795       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:46:00.672794       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1225 13:47:00.672698       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:47:00.914376       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:47:00.914527       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:47:00.915352       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:47:01.915002       1 handler_proxy.go:93] no RequestInfo found in the context
	W1225 13:47:01.915044       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:47:01.915198       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:47:01.915206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1225 13:47:01.915344       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:47:01.916670       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:48:00.673201       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:48:01.916075       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:48:01.916205       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:48:01.916219       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:48:01.917475       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:48:01.917644       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:48:01.917656       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:49:00.672522       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [e34911f64a889c474558a809c46d51dda7ce8bc38786e5152d99fdab8dc8b3c0] <==
	I1225 13:43:47.103608       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:16.586954       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:17.114379       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:44:46.592140       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:44:47.123155       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:16.598720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:17.132219       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:46.606217       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:47.142276       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:16.612297       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:17.152146       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:46.619574       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:47.161770       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:16.626554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:17.172231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:46.634129       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:47.180997       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:48:16.640376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:48:17.190056       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:48:27.497145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="436.692µs"
	I1225 13:48:42.491006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="201.277µs"
	E1225 13:48:46.646797       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:48:47.198588       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:49:16.653517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:49:17.208386       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5a29e019e5e0da70e7c82956b16bbed904e22cb6f84ed819e026d991832053a6] <==
	I1225 13:27:04.777706       1 server_others.go:69] "Using iptables proxy"
	I1225 13:27:04.802056       1 node.go:141] Successfully retrieved node IP: 192.168.50.179
	I1225 13:27:04.924525       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 13:27:04.924724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:27:04.929669       1 server_others.go:152] "Using iptables Proxier"
	I1225 13:27:04.929800       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:27:04.930667       1 server.go:846] "Version info" version="v1.28.4"
	I1225 13:27:04.931025       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:27:04.933503       1 config.go:188] "Starting service config controller"
	I1225 13:27:04.942159       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:27:04.934620       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:27:04.942303       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:27:04.938848       1 config.go:315] "Starting node config controller"
	I1225 13:27:04.942316       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:27:05.042958       1 shared_informer.go:318] Caches are synced for node config
	I1225 13:27:05.042993       1 shared_informer.go:318] Caches are synced for service config
	I1225 13:27:05.043006       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [868a5855738aece955742c54511535f0b8e3f7a5d93d579c16299c5961732480] <==
	I1225 13:26:57.534780       1 serving.go:348] Generated self-signed cert in-memory
	W1225 13:27:00.838983       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 13:27:00.839223       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 13:27:00.839239       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 13:27:00.839338       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 13:27:00.905625       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1225 13:27:00.905720       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:27:00.907977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 13:27:00.908182       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 13:27:00.908550       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1225 13:27:00.908644       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 13:27:01.008609       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:25 UTC, ends at Mon 2023-12-25 13:49:33 UTC. --
	Dec 25 13:46:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:46:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:46:54 embed-certs-880612 kubelet[930]: E1225 13:46:54.472798     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:47:08 embed-certs-880612 kubelet[930]: E1225 13:47:08.473507     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:47:24 embed-certs-880612 kubelet[930]: E1225 13:47:24.473469     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:47:37 embed-certs-880612 kubelet[930]: E1225 13:47:37.474184     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:47:48 embed-certs-880612 kubelet[930]: E1225 13:47:48.474218     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:47:53 embed-certs-880612 kubelet[930]: E1225 13:47:53.489589     930 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:47:53 embed-certs-880612 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:47:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:47:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:47:59 embed-certs-880612 kubelet[930]: E1225 13:47:59.476707     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:48:13 embed-certs-880612 kubelet[930]: E1225 13:48:13.483390     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:48:13 embed-certs-880612 kubelet[930]: E1225 13:48:13.483434     930 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:48:13 embed-certs-880612 kubelet[930]: E1225 13:48:13.483636     930 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6s6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-chnh2_kube-system(5a0bb4ec-4652-4e5a-9da4-3ce126a4be11): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:48:13 embed-certs-880612 kubelet[930]: E1225 13:48:13.483683     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:48:27 embed-certs-880612 kubelet[930]: E1225 13:48:27.474540     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:48:42 embed-certs-880612 kubelet[930]: E1225 13:48:42.473475     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:48:53 embed-certs-880612 kubelet[930]: E1225 13:48:53.489562     930 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:48:53 embed-certs-880612 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:48:53 embed-certs-880612 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:48:53 embed-certs-880612 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:48:57 embed-certs-880612 kubelet[930]: E1225 13:48:57.475333     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:49:09 embed-certs-880612 kubelet[930]: E1225 13:49:09.473263     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	Dec 25 13:49:21 embed-certs-880612 kubelet[930]: E1225 13:49:21.474518     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-chnh2" podUID="5a0bb4ec-4652-4e5a-9da4-3ce126a4be11"
	
	
	==> storage-provisioner [03bfbdc74bd6af83943f7c1fc1a32e8d1aeef1c885d549bb847d21aa4d9377d7] <==
	I1225 13:27:04.472343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 13:27:34.485394       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [0851cb5599abc7225036cb445a30c9d4d268867d2f7df6706846a9c6fcdb1751] <==
	I1225 13:27:34.889572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:27:34.906969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:27:34.907226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:27:52.325533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:27:52.328200       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06!
	I1225 13:27:52.329607       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96e34e46-8347-4b63-a898-05e7a93d868f", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06 became leader
	I1225 13:27:52.428480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880612_fedceb4c-3f9b-4180-b70b-44631a2bfe06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-880612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-chnh2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2: exit status 1 (71.466804ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-chnh2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-880612 describe pod metrics-server-57f55c9bc5-chnh2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.48s)
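The kubelet excerpt above shows why the embed-certs AddonExistsAfterStop check never converged: metrics-server-57f55c9bc5-chnh2 stays in ImagePullBackOff because its image points at the deliberately unreachable registry fake.domain/registry.k8s.io/echoserver:1.4. A minimal manual check against the embed-certs-880612 profile might look like the sketch below; it assumes the addon pod carries the usual k8s-app=metrics-server label, which is not confirmed by this report.

    # Hypothetical manual diagnosis (not part of the recorded test run).
    # List kube-system pods that are not Running, mirroring the helpers_test.go field selector.
    kubectl --context embed-certs-880612 -n kube-system get pods --field-selector=status.phase!=Running

    # Show the image reference and the ImagePullBackOff events for the stuck pod
    # (assumes the standard k8s-app=metrics-server label on the addon deployment).
    kubectl --context embed-certs-880612 -n kube-system describe pod -l k8s-app=metrics-server

If the describe output repeats the "dial tcp: lookup fake.domain: no such host" pull error seen in the kubelet log, the failure is the expected unreachable-registry condition rather than a regression in the addon itself.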

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (266.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1225 13:46:26.363400 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-25 13:50:24.399428804 +0000 UTC m=+5649.017905834
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-344803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.96µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-344803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-344803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-344803 logs -n 25: (1.209322716s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:49 UTC | 25 Dec 23 13:49 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:49 UTC | 25 Dec 23 13:50 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo journalctl                       | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo docker                           | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo                                  | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo cat                              | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo containerd                       | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo systemctl                        | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo find                             | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-712615 sudo crio                             | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-712615                                       | auto-712615   | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC | 25 Dec 23 13:50 UTC |
	| start   | -p calico-712615 --memory=3072                       | calico-712615 | jenkins | v1.32.0 | 25 Dec 23 13:50 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 13:50:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 13:50:06.616678 1491921 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:50:06.616914 1491921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:50:06.616928 1491921 out.go:309] Setting ErrFile to fd 2...
	I1225 13:50:06.616935 1491921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:50:06.617161 1491921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:50:06.617918 1491921 out.go:303] Setting JSON to false
	I1225 13:50:06.619285 1491921 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160360,"bootTime":1703351847,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:50:06.619353 1491921 start.go:138] virtualization: kvm guest
	I1225 13:50:06.621611 1491921 out.go:177] * [calico-712615] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:50:06.624110 1491921 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:50:06.625567 1491921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:50:06.624212 1491921 notify.go:220] Checking for updates...
	I1225 13:50:06.627145 1491921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:50:06.628493 1491921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:50:06.629626 1491921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:50:06.630931 1491921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:50:06.632604 1491921 config.go:182] Loaded profile config "default-k8s-diff-port-344803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:50:06.632726 1491921 config.go:182] Loaded profile config "kindnet-712615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:50:06.632825 1491921 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:50:06.632910 1491921 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:50:06.673671 1491921 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 13:50:06.675066 1491921 start.go:298] selected driver: kvm2
	I1225 13:50:06.675093 1491921 start.go:902] validating driver "kvm2" against <nil>
	I1225 13:50:06.675111 1491921 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:50:06.675954 1491921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:50:06.676047 1491921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 13:50:06.692333 1491921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 13:50:06.692399 1491921 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1225 13:50:06.692632 1491921 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 13:50:06.692706 1491921 cni.go:84] Creating CNI manager for "calico"
	I1225 13:50:06.692722 1491921 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1225 13:50:06.692728 1491921 start_flags.go:323] config:
	{Name:calico-712615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-712615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 13:50:06.692912 1491921 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 13:50:06.695049 1491921 out.go:177] * Starting control plane node calico-712615 in cluster calico-712615
	I1225 13:50:07.406764 1489961 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.39:22: connect: no route to host
	I1225 13:50:10.482659 1489961 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.39:22: connect: no route to host
	I1225 13:50:06.696354 1491921 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1225 13:50:06.696449 1491921 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1225 13:50:06.696469 1491921 cache.go:56] Caching tarball of preloaded images
	I1225 13:50:06.696625 1491921 preload.go:174] Found /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 13:50:06.696662 1491921 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1225 13:50:06.696807 1491921 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/calico-712615/config.json ...
	I1225 13:50:06.696838 1491921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/calico-712615/config.json: {Name:mk6a8fd338d515ffe4da4e2990d39aef8ddf3045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 13:50:06.697046 1491921 start.go:365] acquiring machines lock for calico-712615: {Name:mk4dc348fa14145abcb0ff1cc4db8becfa141635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1225 13:50:16.558707 1489961 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.39:22: connect: no route to host
	I1225 13:50:19.630800 1489961 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.39:22: connect: no route to host
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2023-12-25 13:26:47 UTC, ends at Mon 2023-12-25 13:50:25 UTC. --
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.094381088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512225094366669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=07c941fc-71ac-4fac-a668-ed8cf0923d2a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.094865138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de829163-587e-400e-80ac-f7c377a858ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.094909189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de829163-587e-400e-80ac-f7c377a858ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.095083829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de829163-587e-400e-80ac-f7c377a858ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.136109790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9f13ab3f-bfaf-4c78-afff-a4363c952c8b name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.136224867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9f13ab3f-bfaf-4c78-afff-a4363c952c8b name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.137658195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=67462943-a7c1-4048-82f6-bd9e4ede8cdb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.138014364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512225138001820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=67462943-a7c1-4048-82f6-bd9e4ede8cdb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.138835566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0880b67e-6da9-49f2-ac8b-5fc1e88bb683 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.138884811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0880b67e-6da9-49f2-ac8b-5fc1e88bb683 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.139047696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0880b67e-6da9-49f2-ac8b-5fc1e88bb683 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.179457069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5d6a8d66-4ac3-4701-b362-96ae83be34bd name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.179517528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5d6a8d66-4ac3-4701-b362-96ae83be34bd name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.180708155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=69e87614-ba14-4a6c-8927-47d9bc641241 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.181281249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512225181158464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=69e87614-ba14-4a6c-8927-47d9bc641241 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.182025230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf3c00d7-d98e-4298-8b53-5e3591f2f71d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.182094397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf3c00d7-d98e-4298-8b53-5e3591f2f71d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.182329393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cf3c00d7-d98e-4298-8b53-5e3591f2f71d name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.217821333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b8177583-4e0b-4bde-be12-757f876f2b9f name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.217881725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b8177583-4e0b-4bde-be12-757f876f2b9f name=/runtime.v1.RuntimeService/Version
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.224878403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d04ac7cf-7627-4d8b-89e4-c1435f9a377a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.225362440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1703512225225347205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d04ac7cf-7627-4d8b-89e4-c1435f9a377a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.226051760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bdfdbbfc-b565-41d6-8ffb-7be062c1dfa7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.226102875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bdfdbbfc-b565-41d6-8ffb-7be062c1dfa7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 25 13:50:25 default-k8s-diff-port-344803 crio[723]: time="2023-12-25 13:50:25.226342547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd,PodSandboxId:d9c7957bb4ca05cd792cbe341c6e150fb14235c38f384ab790a5a7793124dbdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1703511139216879951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rbmbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5fc3c3-b9db-437d-8088-2f97921bc3bd,},Annotations:map[string]string{io.kubernetes.container.hash: f747fa4c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8,PodSandboxId:503e06ebad5c6da718ca5ba4ec8e29eeaf998c369c77b2e1e4530a8c2ddd66f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1703511138272988202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bee5e6e-1252-4b3d-8d6c-73515d8567e4,},Annotations:map[string]string{io.kubernetes.container.hash: d8899048,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3,PodSandboxId:9c7da8fea5ac3926cb08a46632877e4c34dac5fec5ee662ad1b17a3c28f02278,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1703511136076759995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpk9s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 17d80ffc-e149-4449-aec9-9d90a2fda282,},Annotations:map[string]string{io.kubernetes.container.hash: 3f77eaca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f,PodSandboxId:e8f110c9e64aecfa3b772d71cb50a6ad6fbbb5167f97de00eaca86dba8fdb988,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1703511113805422123,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b7e97da25bd859e
90fc4d0314838a3,},Annotations:map[string]string{io.kubernetes.container.hash: d4ad95f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13,PodSandboxId:25e3b9339d0ba517f676e988826d242007f921073cab46a69b40994baf0c2937,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1703511113630520912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89558a0ee692b524
5a29c7aab9ef729,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2,PodSandboxId:b1248b21fb07a5ef19ab976d8766c2e8fccb3fdad02fb708b5e3b58698d95c65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1703511113604668279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 407e2c1ffda0cd91d0675f36c34b3336,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca,PodSandboxId:26dff8002b28995298b9ebfda1cdeba46e5bce63389fdf2934b8f6a9604e844f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1703511113472392376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-344803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 77930059fbde809ec88a6de735f03c86,},Annotations:map[string]string{io.kubernetes.container.hash: 8951b72a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bdfdbbfc-b565-41d6-8ffb-7be062c1dfa7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	667f9290ab9fd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                   0                   d9c7957bb4ca0       coredns-5dd5756b68-rbmbs
	2752dc28afbf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   503e06ebad5c6       storage-provisioner
	09edd8162e2b7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 minutes ago      Running             kube-proxy                0                   9c7da8fea5ac3       kube-proxy-fpk9s
	94e27fadf048b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                      2                   e8f110c9e64ae       etcd-default-k8s-diff-port-344803
	935f1c4836b96       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   18 minutes ago      Running             kube-scheduler            2                   25e3b9339d0ba       kube-scheduler-default-k8s-diff-port-344803
	3670e177c122b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   18 minutes ago      Running             kube-controller-manager   2                   b1248b21fb07a       kube-controller-manager-default-k8s-diff-port-344803
	3e5f34c8c4093       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   18 minutes ago      Running             kube-apiserver            2                   26dff8002b289       kube-apiserver-default-k8s-diff-port-344803
	
	
	==> coredns [667f9290ab9fdc39d617ec2a80c559c0656d2fc7eced9a8b0924780e6230aefd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37798 - 47913 "HINFO IN 8929664785579530971.855764544156376687. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009378709s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-344803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-344803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f8b637745f32b0b89b0ea392bb3c31ae7b3b68da
	                    minikube.k8s.io/name=default-k8s-diff-port-344803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_25T13_32_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Dec 2023 13:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-344803
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Dec 2023 13:50:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Dec 2023 13:47:40 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Dec 2023 13:47:40 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Dec 2023 13:47:40 +0000   Mon, 25 Dec 2023 13:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Dec 2023 13:47:40 +0000   Mon, 25 Dec 2023 13:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.39
	  Hostname:    default-k8s-diff-port-344803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9137c9b00b9640de913c0f6607cb361e
	  System UUID:                9137c9b0-0b96-40de-913c-0f6607cb361e
	  Boot ID:                    d79c15c2-2217-406f-8530-049b2957669c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rbmbs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-344803                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-344803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-344803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fpk9s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-344803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-slv7p                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m   kubelet          Node default-k8s-diff-port-344803 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-344803 event: Registered Node default-k8s-diff-port-344803 in Controller
	
	
	==> dmesg <==
	[Dec25 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071193] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.651379] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155750] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.510662] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.115514] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.187838] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.161562] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.179009] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.311726] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Dec25 13:27] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[ +14.521579] kauditd_printk_skb: 19 callbacks suppressed
	[Dec25 13:31] systemd-fstab-generator[3520]: Ignoring "noauto" for root device
	[Dec25 13:32] systemd-fstab-generator[3844]: Ignoring "noauto" for root device
	[ +16.115239] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [94e27fadf048befb8c8074498e4a7f5dc5f82ce788cb4bf0c41a70d8b9694d2f] <==
	{"level":"info","ts":"2023-12-25T13:31:56.752749Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-25T13:31:56.753036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-25T13:41:56.790256Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2023-12-25T13:41:56.793644Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.537002ms","hash":3125984727}
	{"level":"info","ts":"2023-12-25T13:41:56.793769Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3125984727,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2023-12-25T13:46:16.174795Z","caller":"traceutil/trace.go:171","msg":"trace[982433258] linearizableReadLoop","detail":"{readStateIndex:1319; appliedIndex:1318; }","duration":"151.732992ms","start":"2023-12-25T13:46:16.022994Z","end":"2023-12-25T13:46:16.174727Z","steps":["trace[982433258] 'read index received'  (duration: 86.965033ms)","trace[982433258] 'applied index is now lower than readState.Index'  (duration: 64.766761ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-25T13:46:16.175224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.065025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:46:16.175278Z","caller":"traceutil/trace.go:171","msg":"trace[1834661411] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1131; }","duration":"152.295663ms","start":"2023-12-25T13:46:16.022969Z","end":"2023-12-25T13:46:16.175265Z","steps":["trace[1834661411] 'agreement among raft nodes before linearized reading'  (duration: 152.005346ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:46:16.175527Z","caller":"traceutil/trace.go:171","msg":"trace[1437742418] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"256.39956ms","start":"2023-12-25T13:46:15.919104Z","end":"2023-12-25T13:46:16.175504Z","steps":["trace[1437742418] 'process raft request'  (duration: 190.900646ms)","trace[1437742418] 'compare'  (duration: 64.394039ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:46:30.153981Z","caller":"traceutil/trace.go:171","msg":"trace[537729215] linearizableReadLoop","detail":"{readStateIndex:1331; appliedIndex:1330; }","duration":"159.092205ms","start":"2023-12-25T13:46:29.994844Z","end":"2023-12-25T13:46:30.153936Z","steps":["trace[537729215] 'read index received'  (duration: 158.93189ms)","trace[537729215] 'applied index is now lower than readState.Index'  (duration: 159.236µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:46:30.15429Z","caller":"traceutil/trace.go:171","msg":"trace[1639489364] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"204.791524ms","start":"2023-12-25T13:46:29.949476Z","end":"2023-12-25T13:46:30.154267Z","steps":["trace[1639489364] 'process raft request'  (duration: 204.313956ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:46:30.154558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.35216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:46:30.154642Z","caller":"traceutil/trace.go:171","msg":"trace[161845086] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1141; }","duration":"128.480483ms","start":"2023-12-25T13:46:30.026151Z","end":"2023-12-25T13:46:30.154631Z","steps":["trace[161845086] 'agreement among raft nodes before linearized reading'  (duration: 128.326274ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:46:30.154658Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.816909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:46:30.154967Z","caller":"traceutil/trace.go:171","msg":"trace[1904851560] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1141; }","duration":"160.141324ms","start":"2023-12-25T13:46:29.994813Z","end":"2023-12-25T13:46:30.154955Z","steps":["trace[1904851560] 'agreement among raft nodes before linearized reading'  (duration: 159.799573ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-25T13:46:56.799605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"info","ts":"2023-12-25T13:46:56.802868Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":920,"took":"2.928291ms","hash":645839125}
	{"level":"info","ts":"2023-12-25T13:46:56.80295Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":645839125,"revision":920,"compact-revision":677}
	{"level":"info","ts":"2023-12-25T13:48:30.581486Z","caller":"traceutil/trace.go:171","msg":"trace[1588124792] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"153.913699ms","start":"2023-12-25T13:48:30.427513Z","end":"2023-12-25T13:48:30.581427Z","steps":["trace[1588124792] 'process raft request'  (duration: 105.317672ms)","trace[1588124792] 'compare'  (duration: 48.30273ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:48:31.392682Z","caller":"traceutil/trace.go:171","msg":"trace[1016937933] linearizableReadLoop","detail":"{readStateIndex:1458; appliedIndex:1457; }","duration":"365.880946ms","start":"2023-12-25T13:48:31.026783Z","end":"2023-12-25T13:48:31.392664Z","steps":["trace[1016937933] 'read index received'  (duration: 365.582345ms)","trace[1016937933] 'applied index is now lower than readState.Index'  (duration: 298.021µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-25T13:48:31.392957Z","caller":"traceutil/trace.go:171","msg":"trace[2049315893] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"406.190158ms","start":"2023-12-25T13:48:30.986745Z","end":"2023-12-25T13:48:31.392935Z","steps":["trace[2049315893] 'process raft request'  (duration: 405.690749ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:48:31.393871Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:48:30.986724Z","time spent":"406.308315ms","remote":"127.0.0.1:42642","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1239 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-25T13:48:31.394369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.192587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-25T13:48:31.394441Z","caller":"traceutil/trace.go:171","msg":"trace[1518351956] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1241; }","duration":"367.67235ms","start":"2023-12-25T13:48:31.026752Z","end":"2023-12-25T13:48:31.394424Z","steps":["trace[1518351956] 'agreement among raft nodes before linearized reading'  (duration: 366.159924ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-25T13:48:31.394489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-25T13:48:31.026734Z","time spent":"367.74543ms","remote":"127.0.0.1:42646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	
	
	==> kernel <==
	 13:50:25 up 23 min,  0 users,  load average: 0.13, 0.15, 0.19
	Linux default-k8s-diff-port-344803 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3e5f34c8c4093b8018d043c2c63c390db4acb6ed138a19dfc388733be2c2bfca] <==
	I1225 13:46:58.391529       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:46:59.391498       1 handler_proxy.go:93] no RequestInfo found in the context
	W1225 13:46:59.391573       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:46:59.391824       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:46:59.391861       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1225 13:46:59.391995       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:46:59.393986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:47:58.264408       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:47:59.392912       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:47:59.393035       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:47:59.393068       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:47:59.394262       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:47:59.394387       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:47:59.394420       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1225 13:48:58.264019       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1225 13:49:58.264255       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 13:49:59.393463       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:49:59.393568       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1225 13:49:59.393615       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1225 13:49:59.394678       1 handler_proxy.go:93] no RequestInfo found in the context
	E1225 13:49:59.394880       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1225 13:49:59.394922       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3670e177c122b27938a39b7a5d4298db6cdb186fb896a0418a8b025cf734c6c2] <==
	I1225 13:44:45.404414       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:14.869017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:15.415728       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:45:44.879682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:45:45.425881       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:14.886673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:15.435424       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:46:44.893239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:46:45.446035       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:14.899381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:15.456140       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:47:44.906107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:47:45.471878       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:48:14.913525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:48:15.482797       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:48:33.007558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="498.258µs"
	E1225 13:48:44.920296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:48:45.494606       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1225 13:48:47.002622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="151.457µs"
	E1225 13:49:14.926907       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:49:15.503538       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:49:44.935980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:49:45.512685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1225 13:50:14.946333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1225 13:50:15.522770       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [09edd8162e2b7c58019b8381da8d9dd5dd0b5c8758f161c40a22360e4fe533b3] <==
	I1225 13:32:17.091391       1 server_others.go:69] "Using iptables proxy"
	I1225 13:32:17.160730       1 node.go:141] Successfully retrieved node IP: 192.168.61.39
	I1225 13:32:17.336101       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1225 13:32:17.336148       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1225 13:32:17.356414       1 server_others.go:152] "Using iptables Proxier"
	I1225 13:32:17.356687       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 13:32:17.357977       1 server.go:846] "Version info" version="v1.28.4"
	I1225 13:32:17.358068       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 13:32:17.368672       1 config.go:188] "Starting service config controller"
	I1225 13:32:17.370037       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 13:32:17.370229       1 config.go:315] "Starting node config controller"
	I1225 13:32:17.370263       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 13:32:17.370911       1 config.go:97] "Starting endpoint slice config controller"
	I1225 13:32:17.370943       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 13:32:17.522012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 13:32:17.522123       1 shared_informer.go:318] Caches are synced for node config
	I1225 13:32:17.522134       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [935f1c4836b96e1df6d91963e579ba210e98fb8cb04b365dac40e5d038693f13] <==
	W1225 13:31:58.453466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:31:58.453523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 13:31:58.453596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 13:31:58.453891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1225 13:31:58.453664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:31:58.453945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 13:31:58.453714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1225 13:31:58.453996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1225 13:31:58.453760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:31:58.455960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 13:31:59.346158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 13:31:59.346262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 13:31:59.353969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 13:31:59.354076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1225 13:31:59.376741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 13:31:59.376838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 13:31:59.399976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1225 13:31:59.400093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1225 13:31:59.432044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1225 13:31:59.432223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1225 13:31:59.665977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1225 13:31:59.666065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1225 13:31:59.750926       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1225 13:31:59.751032       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 13:32:02.122491       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2023-12-25 13:26:47 UTC, ends at Mon 2023-12-25 13:50:25 UTC. --
	Dec 25 13:48:02 default-k8s-diff-port-344803 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:48:02 default-k8s-diff-port-344803 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:48:02 default-k8s-diff-port-344803 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:48:08 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:08.986090    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:48:21 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:20.999239    3851 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:48:21 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:20.999302    3851 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 25 13:48:21 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:20.999576    3851 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nfw56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-slv7p_kube-system(a51c534d-e6d8-48b9-852f-caf598c8853a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 25 13:48:21 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:20.999641    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:48:32 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:32.985998    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:48:46 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:48:46.985653    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:49:00 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:00.986373    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:49:02 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:02.077929    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:49:02 default-k8s-diff-port-344803 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:49:02 default-k8s-diff-port-344803 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:49:02 default-k8s-diff-port-344803 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:49:12 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:12.985741    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:49:27 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:27.986711    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:49:42 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:42.985642    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:49:57 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:49:57.985483    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:50:02 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:50:02.078991    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 25 13:50:02 default-k8s-diff-port-344803 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 25 13:50:02 default-k8s-diff-port-344803 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 25 13:50:02 default-k8s-diff-port-344803 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 25 13:50:09 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:50:09.986385    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	Dec 25 13:50:24 default-k8s-diff-port-344803 kubelet[3851]: E1225 13:50:24.985801    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-slv7p" podUID="a51c534d-e6d8-48b9-852f-caf598c8853a"
	
	
	==> storage-provisioner [2752dc28afbf401478c57ba8e30751979a1aa8205ddcc71523933eb6c178a9b8] <==
	I1225 13:32:18.522452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 13:32:18.542368       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 13:32:18.543402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 13:32:18.596645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 13:32:18.596884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a!
	I1225 13:32:18.621502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2abd788-7c74-4c41-8745-bad346f1dad2", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a became leader
	I1225 13:32:18.698080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-344803_3cc37642-73cd-4599-8ab9-70d46378544a!
	

                                                
                                                
-- /stdout --
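Note on the repeated ip6tables lines in the kubelet log above: the guest kernel has no ip6tables nat table available, so kubelet cannot create its KUBE-KUBELET-CANARY chain and logs that error periodically. A minimal sketch for confirming this from the host, assuming the profile's ssh wrapper is still reachable (illustrative only, not part of the test run):

	# check whether the IPv6 netfilter modules are loaded in the guest
	out/minikube-linux-amd64 -p default-k8s-diff-port-344803 ssh "lsmod | grep -E 'ip6_tables|ip6table_nat'"
	# reproduce the same failure kubelet hits (expects "Table does not exist" if the module is missing)
	out/minikube-linux-amd64 -p default-k8s-diff-port-344803 ssh "sudo ip6tables -t nat -L -n"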
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-slv7p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p: exit status 1 (65.942788ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-slv7p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-344803 describe pod metrics-server-57f55c9bc5-slv7p: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (266.27s)
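The kubelet log above shows metrics-server stuck in ImagePullBackOff on the unreachable image fake.domain/registry.k8s.io/echoserver:1.4, and the post-mortem describe then finds the pod already gone. A minimal sketch for inspecting such a pod while it still exists, assuming the addon keeps its usual k8s-app=metrics-server label (illustrative, not part of the test):

	# current state of the metrics-server pods (assumes the usual k8s-app=metrics-server label)
	kubectl --context default-k8s-diff-port-344803 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# image-pull events behind the ImagePullBackOff
	kubectl --context default-k8s-diff-port-344803 -n kube-system describe pods -l k8s-app=metrics-server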

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (139.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-058636 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-058636 --alsologtostderr -v=3: exit status 82 (2m1.284097853s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-058636"  ...
	* Stopping node "newest-cni-058636"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:46:58.691770 1489025 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:46:58.691971 1489025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:46:58.691982 1489025 out.go:309] Setting ErrFile to fd 2...
	I1225 13:46:58.691987 1489025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:46:58.692253 1489025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:46:58.692561 1489025 out.go:303] Setting JSON to false
	I1225 13:46:58.692677 1489025 mustload.go:65] Loading cluster: newest-cni-058636
	I1225 13:46:58.693197 1489025 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:46:58.693292 1489025 profile.go:148] Saving config to /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/newest-cni-058636/config.json ...
	I1225 13:46:58.693467 1489025 mustload.go:65] Loading cluster: newest-cni-058636
	I1225 13:46:58.693591 1489025 config.go:182] Loaded profile config "newest-cni-058636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1225 13:46:58.693624 1489025 stop.go:39] StopHost: newest-cni-058636
	I1225 13:46:58.694016 1489025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:46:58.694075 1489025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:46:58.710599 1489025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I1225 13:46:58.711174 1489025 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:46:58.711959 1489025 main.go:141] libmachine: Using API Version  1
	I1225 13:46:58.711996 1489025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:46:58.712398 1489025 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:46:58.715289 1489025 out.go:177] * Stopping node "newest-cni-058636"  ...
	I1225 13:46:58.716941 1489025 main.go:141] libmachine: Stopping "newest-cni-058636"...
	I1225 13:46:58.716982 1489025 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:46:58.719079 1489025 main.go:141] libmachine: (newest-cni-058636) Calling .Stop
	I1225 13:46:58.723128 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 0/60
	I1225 13:46:59.725287 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 1/60
	I1225 13:47:00.727286 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 2/60
	I1225 13:47:01.729732 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 3/60
	I1225 13:47:02.731761 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 4/60
	I1225 13:47:03.733890 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 5/60
	I1225 13:47:04.735424 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 6/60
	I1225 13:47:05.737064 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 7/60
	I1225 13:47:06.738464 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 8/60
	I1225 13:47:07.739867 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 9/60
	I1225 13:47:08.741444 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 10/60
	I1225 13:47:09.742925 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 11/60
	I1225 13:47:10.745185 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 12/60
	I1225 13:47:11.747486 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 13/60
	I1225 13:47:12.749074 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 14/60
	I1225 13:47:13.751041 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 15/60
	I1225 13:47:14.753133 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 16/60
	I1225 13:47:15.754638 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 17/60
	I1225 13:47:16.756945 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 18/60
	I1225 13:47:17.758492 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 19/60
	I1225 13:47:18.760541 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 20/60
	I1225 13:47:19.762084 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 21/60
	I1225 13:47:20.763644 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 22/60
	I1225 13:47:21.765042 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 23/60
	I1225 13:47:22.766753 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 24/60
	I1225 13:47:23.769023 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 25/60
	I1225 13:47:24.770508 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 26/60
	I1225 13:47:25.772142 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 27/60
	I1225 13:47:26.773635 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 28/60
	I1225 13:47:27.775591 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 29/60
	I1225 13:47:28.777749 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 30/60
	I1225 13:47:29.779310 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 31/60
	I1225 13:47:30.781821 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 32/60
	I1225 13:47:31.783397 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 33/60
	I1225 13:47:32.784906 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 34/60
	I1225 13:47:33.786991 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 35/60
	I1225 13:47:34.789548 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 36/60
	I1225 13:47:35.790982 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 37/60
	I1225 13:47:36.793130 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 38/60
	I1225 13:47:37.794689 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 39/60
	I1225 13:47:38.797191 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 40/60
	I1225 13:47:39.799416 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 41/60
	I1225 13:47:40.800985 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 42/60
	I1225 13:47:41.803024 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 43/60
	I1225 13:47:42.805194 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 44/60
	I1225 13:47:43.807374 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 45/60
	I1225 13:47:44.808807 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 46/60
	I1225 13:47:45.810641 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 47/60
	I1225 13:47:46.812905 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 48/60
	I1225 13:47:47.814531 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 49/60
	I1225 13:47:48.816806 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 50/60
	I1225 13:47:49.818658 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 51/60
	I1225 13:47:50.821145 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 52/60
	I1225 13:47:51.822994 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 53/60
	I1225 13:47:52.825341 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 54/60
	I1225 13:47:53.827517 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 55/60
	I1225 13:47:54.829247 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 56/60
	I1225 13:47:55.831006 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 57/60
	I1225 13:47:56.833108 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 58/60
	I1225 13:47:57.834847 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 59/60
	I1225 13:47:58.836060 1489025 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:47:58.836134 1489025 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:47:58.836161 1489025 retry.go:31] will retry after 911.66012ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:47:59.748397 1489025 stop.go:39] StopHost: newest-cni-058636
	I1225 13:47:59.748939 1489025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 13:47:59.749006 1489025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 13:47:59.765017 1489025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I1225 13:47:59.765578 1489025 main.go:141] libmachine: () Calling .GetVersion
	I1225 13:47:59.766184 1489025 main.go:141] libmachine: Using API Version  1
	I1225 13:47:59.766205 1489025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 13:47:59.766571 1489025 main.go:141] libmachine: () Calling .GetMachineName
	I1225 13:47:59.768675 1489025 out.go:177] * Stopping node "newest-cni-058636"  ...
	I1225 13:47:59.770085 1489025 main.go:141] libmachine: Stopping "newest-cni-058636"...
	I1225 13:47:59.770112 1489025 main.go:141] libmachine: (newest-cni-058636) Calling .GetState
	I1225 13:47:59.772110 1489025 main.go:141] libmachine: (newest-cni-058636) Calling .Stop
	I1225 13:47:59.776174 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 0/60
	I1225 13:48:00.777675 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 1/60
	I1225 13:48:01.779079 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 2/60
	I1225 13:48:02.781011 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 3/60
	I1225 13:48:03.783448 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 4/60
	I1225 13:48:04.785123 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 5/60
	I1225 13:48:05.787094 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 6/60
	I1225 13:48:06.788636 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 7/60
	I1225 13:48:07.790117 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 8/60
	I1225 13:48:08.791728 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 9/60
	I1225 13:48:09.794050 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 10/60
	I1225 13:48:10.796184 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 11/60
	I1225 13:48:11.797572 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 12/60
	I1225 13:48:12.799065 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 13/60
	I1225 13:48:13.800987 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 14/60
	I1225 13:48:14.802928 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 15/60
	I1225 13:48:15.805462 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 16/60
	I1225 13:48:16.807444 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 17/60
	I1225 13:48:17.809086 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 18/60
	I1225 13:48:18.810793 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 19/60
	I1225 13:48:19.812809 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 20/60
	I1225 13:48:20.814221 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 21/60
	I1225 13:48:21.815699 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 22/60
	I1225 13:48:22.817413 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 23/60
	I1225 13:48:23.819134 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 24/60
	I1225 13:48:24.821502 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 25/60
	I1225 13:48:25.823968 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 26/60
	I1225 13:48:26.825685 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 27/60
	I1225 13:48:27.827142 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 28/60
	I1225 13:48:28.829642 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 29/60
	I1225 13:48:29.832007 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 30/60
	I1225 13:48:30.833511 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 31/60
	I1225 13:48:31.835208 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 32/60
	I1225 13:48:32.837459 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 33/60
	I1225 13:48:33.839112 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 34/60
	I1225 13:48:34.841438 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 35/60
	I1225 13:48:35.843187 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 36/60
	I1225 13:48:36.845306 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 37/60
	I1225 13:48:37.846798 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 38/60
	I1225 13:48:38.848261 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 39/60
	I1225 13:48:39.849913 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 40/60
	I1225 13:48:40.851418 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 41/60
	I1225 13:48:41.853057 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 42/60
	I1225 13:48:42.854481 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 43/60
	I1225 13:48:43.856001 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 44/60
	I1225 13:48:44.857981 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 45/60
	I1225 13:48:45.859567 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 46/60
	I1225 13:48:46.861198 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 47/60
	I1225 13:48:47.862816 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 48/60
	I1225 13:48:48.865173 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 49/60
	I1225 13:48:49.867344 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 50/60
	I1225 13:48:50.869391 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 51/60
	I1225 13:48:51.870897 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 52/60
	I1225 13:48:52.873411 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 53/60
	I1225 13:48:53.875010 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 54/60
	I1225 13:48:54.876773 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 55/60
	I1225 13:48:55.878509 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 56/60
	I1225 13:48:56.880632 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 57/60
	I1225 13:48:57.882965 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 58/60
	I1225 13:48:58.885225 1489025 main.go:141] libmachine: (newest-cni-058636) Waiting for machine to stop 59/60
	I1225 13:48:59.885772 1489025 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1225 13:48:59.885844 1489025 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1225 13:48:59.888058 1489025 out.go:177] 
	W1225 13:48:59.889576 1489025 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1225 13:48:59.889597 1489025 out.go:239] * 
	* 
	W1225 13:48:59.903189 1489025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 13:48:59.904864 1489025 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-058636 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636
E1225 13:49:07.347784 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636: exit status 3 (18.478997224s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:49:18.386748 1489788 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E1225 13:49:18.386767 1489788 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-058636" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (139.76s)
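Here `minikube stop` polled the KVM machine through two rounds of 60 one-second checks, never saw it leave the "Running" state, and exited 82 (GUEST_STOP_TIMEOUT); the follow-up status check then failed because SSH to 192.168.39.39:22 had no route to host. A minimal sketch for checking the underlying libvirt domain from the host, assuming the kvm2 driver named the domain after the profile (illustrative only):

	# what libvirt reports for the stuck domain (assumes the domain is named after the profile)
	virsh --connect qemu:///system list --all
	virsh --connect qemu:///system dominfo newest-cni-058636
	# last resort: force the domain off, then let minikube re-check it
	virsh --connect qemu:///system destroy newest-cni-058636
	out/minikube-linux-amd64 status -p newest-cni-058636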

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636
E1225 13:49:21.462120 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:21.467474 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:21.477735 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:21.498042 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:21.538345 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636: exit status 3 (3.196009982s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:49:21.582898 1489862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E1225 13:49:21.582930 1489862 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-058636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1225 13:49:21.619493 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:21.780258 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:22.101083 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:22.742099 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:24.022636 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:49:26.583779 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-058636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.167485044s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-058636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636: exit status 3 (3.048819878s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 13:49:30.798921 1489920 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E1225 13:49:30.798943 1489920 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-058636" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)
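The addon enable fails for the same underlying reason: minikube cannot open an SSH session to 192.168.39.39:22, so the paused-state check behind `addons enable` aborts with MK_ADDON_ENABLE_PAUSED. A minimal sketch for confirming SSH reachability before retrying (illustrative only):

	# is anything answering on the guest's SSH port?
	nc -vz -w 5 192.168.39.39 22
	# if it is, minikube's own SSH wrapper should succeed as well
	out/minikube-linux-amd64 -p newest-cni-058636 ssh "true"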

                                                
                                    

Test pass (240/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.26
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 4.24
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 3.93
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.61
27 TestOffline 95.5
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 155.1
34 TestAddons/parallel/Registry 16.89
36 TestAddons/parallel/InspektorGadget 10.9
37 TestAddons/parallel/MetricsServer 7.08
38 TestAddons/parallel/HelmTiller 15.22
40 TestAddons/parallel/CSI 95.46
41 TestAddons/parallel/Headlamp 13.69
42 TestAddons/parallel/CloudSpanner 5.69
43 TestAddons/parallel/LocalPath 55.19
44 TestAddons/parallel/NvidiaDevicePlugin 5.68
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.13
50 TestCertOptions 56.79
51 TestCertExpiration 352.6
53 TestForceSystemdFlag 52.19
54 TestForceSystemdEnv 51.75
56 TestKVMDriverInstallOrUpdate 1.25
60 TestErrorSpam/setup 48.34
61 TestErrorSpam/start 0.41
62 TestErrorSpam/status 0.83
63 TestErrorSpam/pause 1.74
64 TestErrorSpam/unpause 1.87
65 TestErrorSpam/stop 2.29
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 101.32
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 37.49
72 TestFunctional/serial/KubeContext 0.05
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
77 TestFunctional/serial/CacheCmd/cache/add_local 1.08
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
82 TestFunctional/serial/CacheCmd/cache/delete 0.13
83 TestFunctional/serial/MinikubeKubectlCmd 0.13
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
85 TestFunctional/serial/ExtraConfig 35.64
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.6
88 TestFunctional/serial/LogsFileCmd 1.64
89 TestFunctional/serial/InvalidService 3.99
91 TestFunctional/parallel/ConfigCmd 0.45
92 TestFunctional/parallel/DashboardCmd 15.95
93 TestFunctional/parallel/DryRun 0.31
94 TestFunctional/parallel/InternationalLanguage 0.15
95 TestFunctional/parallel/StatusCmd 0.97
99 TestFunctional/parallel/ServiceCmdConnect 20.65
100 TestFunctional/parallel/AddonsCmd 0.17
101 TestFunctional/parallel/PersistentVolumeClaim 52.47
103 TestFunctional/parallel/SSHCmd 0.47
104 TestFunctional/parallel/CpCmd 1.58
105 TestFunctional/parallel/MySQL 27.42
106 TestFunctional/parallel/FileSync 0.27
107 TestFunctional/parallel/CertSync 1.83
111 TestFunctional/parallel/NodeLabels 0.09
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
115 TestFunctional/parallel/License 0.16
116 TestFunctional/parallel/ServiceCmd/DeployApp 12.2
126 TestFunctional/parallel/Version/short 0.07
127 TestFunctional/parallel/Version/components 0.93
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
132 TestFunctional/parallel/ImageCommands/ImageBuild 4.9
133 TestFunctional/parallel/ImageCommands/Setup 1.01
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.96
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.21
140 TestFunctional/parallel/ServiceCmd/List 0.47
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
143 TestFunctional/parallel/ServiceCmd/Format 0.39
144 TestFunctional/parallel/ServiceCmd/URL 0.43
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.39
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.63
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.25
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
150 TestFunctional/parallel/ProfileCmd/profile_list 0.37
151 TestFunctional/parallel/MountCmd/any-port 10.1
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 78.25
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.96
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
168 TestJSONOutput/start/Command 65.77
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.71
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.66
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.23
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 100.58
200 TestMountStart/serial/StartWithMountFirst 27.51
201 TestMountStart/serial/VerifyMountFirst 0.43
202 TestMountStart/serial/StartWithMountSecond 25.62
203 TestMountStart/serial/VerifyMountSecond 0.42
204 TestMountStart/serial/DeleteFirst 0.7
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 1.24
207 TestMountStart/serial/RestartStopped 26.43
208 TestMountStart/serial/VerifyMountPostStop 0.44
211 TestMultiNode/serial/FreshStart2Nodes 112.4
212 TestMultiNode/serial/DeployApp2Nodes 4.4
214 TestMultiNode/serial/AddNode 43.76
215 TestMultiNode/serial/MultiNodeLabels 0.07
216 TestMultiNode/serial/ProfileList 0.24
217 TestMultiNode/serial/CopyFile 8.18
218 TestMultiNode/serial/StopNode 3.04
219 TestMultiNode/serial/StartAfterStop 29.51
221 TestMultiNode/serial/DeleteNode 1.66
223 TestMultiNode/serial/RestartMultiNode 448.08
224 TestMultiNode/serial/ValidateNameConflict 48.74
231 TestScheduledStopUnix 124.27
237 TestKubernetesUpgrade 180.65
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
241 TestNoKubernetes/serial/StartWithK8s 106.03
250 TestPause/serial/Start 138.31
258 TestNetworkPlugins/group/false 5
262 TestNoKubernetes/serial/StartWithStopK8s 11.36
263 TestNoKubernetes/serial/Start 51.31
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
265 TestNoKubernetes/serial/ProfileList 2.63
266 TestNoKubernetes/serial/Stop 1.22
267 TestNoKubernetes/serial/StartNoArgs 70.26
268 TestPause/serial/SecondStartNoReconfiguration 87.82
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
270 TestStoppedBinaryUpgrade/Setup 0.37
272 TestPause/serial/Pause 0.91
273 TestPause/serial/VerifyStatus 0.36
274 TestPause/serial/Unpause 0.85
275 TestPause/serial/PauseAgain 1.07
276 TestPause/serial/DeletePaused 1.12
277 TestPause/serial/VerifyDeletedResources 0.42
279 TestStartStop/group/old-k8s-version/serial/FirstStart 139.15
281 TestStartStop/group/no-preload/serial/FirstStart 90.98
282 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
285 TestStartStop/group/no-preload/serial/DeployApp 9.3
286 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
289 TestStartStop/group/embed-certs/serial/FirstStart 128.78
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.44
292 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 124.12
294 TestStartStop/group/old-k8s-version/serial/SecondStart 426.98
296 TestStartStop/group/embed-certs/serial/DeployApp 9.3
297 TestStartStop/group/no-preload/serial/SecondStart 552.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
304 TestStartStop/group/embed-certs/serial/SecondStart 401.77
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 706.13
315 TestStartStop/group/newest-cni/serial/FirstStart 59.25
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
320 TestNetworkPlugins/group/auto/Start 103.42
322 TestStartStop/group/newest-cni/serial/SecondStart 414.17
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/embed-certs/serial/Pause 2.91
325 TestNetworkPlugins/group/auto/KubeletFlags 0.26
326 TestNetworkPlugins/group/auto/NetCatPod 12.25
327 TestNetworkPlugins/group/kindnet/Start 342.69
328 TestNetworkPlugins/group/auto/DNS 0.18
329 TestNetworkPlugins/group/auto/Localhost 0.15
330 TestNetworkPlugins/group/auto/HairPin 0.16
331 TestNetworkPlugins/group/calico/Start 367.51
332 TestNetworkPlugins/group/custom-flannel/Start 364.78
333 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
335 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
336 TestNetworkPlugins/group/kindnet/DNS 0.3
337 TestNetworkPlugins/group/kindnet/Localhost 0.24
338 TestNetworkPlugins/group/kindnet/HairPin 0.23
339 TestNetworkPlugins/group/enable-default-cni/Start 105.43
340 TestNetworkPlugins/group/calico/ControllerPod 6.01
341 TestNetworkPlugins/group/calico/KubeletFlags 0.27
342 TestNetworkPlugins/group/calico/NetCatPod 12.26
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
346 TestStartStop/group/newest-cni/serial/Pause 3.16
347 TestNetworkPlugins/group/flannel/Start 90.7
348 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
349 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.34
350 TestNetworkPlugins/group/calico/DNS 0.26
351 TestNetworkPlugins/group/calico/Localhost 0.21
352 TestNetworkPlugins/group/calico/HairPin 0.22
353 TestNetworkPlugins/group/custom-flannel/DNS 0.29
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
356 TestNetworkPlugins/group/bridge/Start 116.93
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
362 TestNetworkPlugins/group/flannel/ControllerPod 6.01
363 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
364 TestNetworkPlugins/group/flannel/NetCatPod 12.31
365 TestNetworkPlugins/group/flannel/DNS 0.18
366 TestNetworkPlugins/group/flannel/Localhost 0.16
367 TestNetworkPlugins/group/flannel/HairPin 0.14
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
369 TestNetworkPlugins/group/bridge/NetCatPod 11.25
370 TestNetworkPlugins/group/bridge/DNS 0.18
371 TestNetworkPlugins/group/bridge/Localhost 0.14
372 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (7.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.261884166s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611991
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611991: exit status 85 (81.156775ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:16:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:16:15.500350 1449809 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:16:15.500653 1449809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:15.500663 1449809 out.go:309] Setting ErrFile to fd 2...
	I1225 12:16:15.500668 1449809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:15.500839 1449809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	W1225 12:16:15.501012 1449809 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: open /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: no such file or directory
	I1225 12:16:15.501678 1449809 out.go:303] Setting JSON to true
	I1225 12:16:15.502770 1449809 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":154729,"bootTime":1703351847,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:16:15.502839 1449809 start.go:138] virtualization: kvm guest
	I1225 12:16:15.505520 1449809 out.go:97] [download-only-611991] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:16:15.507009 1449809 out.go:169] MINIKUBE_LOCATION=17847
	W1225 12:16:15.505670 1449809 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball: no such file or directory
	I1225 12:16:15.505781 1449809 notify.go:220] Checking for updates...
	I1225 12:16:15.509730 1449809 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:16:15.511298 1449809 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:16:15.512829 1449809 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:15.514329 1449809 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1225 12:16:15.517292 1449809 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1225 12:16:15.517654 1449809 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:16:15.555170 1449809 out.go:97] Using the kvm2 driver based on user configuration
	I1225 12:16:15.555229 1449809 start.go:298] selected driver: kvm2
	I1225 12:16:15.555238 1449809 start.go:902] validating driver "kvm2" against <nil>
	I1225 12:16:15.555624 1449809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:16:15.555708 1449809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17847-1442600/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1225 12:16:15.572906 1449809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1225 12:16:15.573007 1449809 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1225 12:16:15.573535 1449809 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1225 12:16:15.573694 1449809 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1225 12:16:15.573762 1449809 cni.go:84] Creating CNI manager for ""
	I1225 12:16:15.573781 1449809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1225 12:16:15.573793 1449809 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1225 12:16:15.573803 1449809 start_flags.go:323] config:
	{Name:download-only-611991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-611991 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:16:15.574017 1449809 iso.go:125] acquiring lock: {Name:mkcc1ebba21e33209f1c0c76f419a7ab9569fcea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 12:16:15.576155 1449809 out.go:97] Downloading VM boot image ...
	I1225 12:16:15.576205 1449809 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1225 12:16:17.870904 1449809 out.go:97] Starting control plane node download-only-611991 in cluster download-only-611991
	I1225 12:16:17.870929 1449809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 12:16:17.895367 1449809 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1225 12:16:17.895411 1449809 cache.go:56] Caching tarball of preloaded images
	I1225 12:16:17.895569 1449809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1225 12:16:17.897629 1449809 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1225 12:16:17.897663 1449809 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1225 12:16:17.921678 1449809 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17847-1442600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-611991"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
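For local reproduction, the same cache-only flow can be run with the flags shown in the audit table above; with --download-only minikube fetches the ISO and the preload tarball but never creates a VM, which is why the `minikube logs` output reports that the control plane node does not exist and the command exits 85. A minimal sketch, assuming a default MINIKUBE_HOME (illustrative only):

	# populate the ISO and preload caches without starting a cluster
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2
	# confirm the cached artifacts referenced in the log above (assumes default MINIKUBE_HOME of ~/.minikube)
	ls ~/.minikube/cache/iso/amd64/ ~/.minikube/cache/preloaded-tarball/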

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.241237574s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611991
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611991: exit status 85 (80.242196ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:16:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:16:22.846299 1449866 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:16:22.846601 1449866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:22.846613 1449866 out.go:309] Setting ErrFile to fd 2...
	I1225 12:16:22.846618 1449866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:22.846804 1449866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	W1225 12:16:22.846919 1449866 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: open /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: no such file or directory
	I1225 12:16:22.847418 1449866 out.go:303] Setting JSON to true
	I1225 12:16:22.848321 1449866 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":154736,"bootTime":1703351847,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:16:22.848387 1449866 start.go:138] virtualization: kvm guest
	I1225 12:16:22.850673 1449866 out.go:97] [download-only-611991] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:16:22.852372 1449866 out.go:169] MINIKUBE_LOCATION=17847
	I1225 12:16:22.850926 1449866 notify.go:220] Checking for updates...
	I1225 12:16:22.855759 1449866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:16:22.857720 1449866 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:16:22.859294 1449866 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:22.860706 1449866 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-611991"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (3.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611991 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.928881366s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (3.93s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611991
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611991: exit status 85 (83.092859ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-611991 | jenkins | v1.32.0 | 25 Dec 23 12:16 UTC |          |
	|         | -p download-only-611991           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/25 12:16:27
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 12:16:27.168103 1449911 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:16:27.168291 1449911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:27.168303 1449911 out.go:309] Setting ErrFile to fd 2...
	I1225 12:16:27.168308 1449911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:16:27.168509 1449911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	W1225 12:16:27.168633 1449911 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: open /home/jenkins/minikube-integration/17847-1442600/.minikube/config/config.json: no such file or directory
	I1225 12:16:27.169104 1449911 out.go:303] Setting JSON to true
	I1225 12:16:27.170054 1449911 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":154740,"bootTime":1703351847,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:16:27.170134 1449911 start.go:138] virtualization: kvm guest
	I1225 12:16:27.172418 1449911 out.go:97] [download-only-611991] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:16:27.174204 1449911 out.go:169] MINIKUBE_LOCATION=17847
	I1225 12:16:27.172599 1449911 notify.go:220] Checking for updates...
	I1225 12:16:27.177002 1449911 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:16:27.178575 1449911 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:16:27.180015 1449911 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:16:27.181666 1449911 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-611991"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
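Note: the download-only runs above can be reproduced by hand with the same flags the test passes to the minikube binary; the profile name below is arbitrary and the preload tarball lands under ~/.minikube/cache. A minimal sketch, not part of the test run:

    minikube start -o=json --download-only -p download-demo --force \
      --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2
    minikube delete --all   # clears the throwaway profile, as TestDownloadOnly/DeleteAll does below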

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-611991
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-944204 --alsologtostderr --binary-mirror http://127.0.0.1:35281 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-944204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-944204
--- PASS: TestBinaryMirror (0.61s)
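Note: TestBinaryMirror points minikube at an alternate HTTP location for the Kubernetes binaries via --binary-mirror; the 127.0.0.1:35281 endpoint in the run above is served by the test itself. A minimal sketch with a placeholder mirror URL:

    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:<port> --driver=kvm2 --container-runtime=crio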

                                                
                                    
TestOffline (95.5s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-904416 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-904416 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.307825042s)
helpers_test.go:175: Cleaning up "offline-crio-904416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-904416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-904416: (1.187141091s)
--- PASS: TestOffline (95.50s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-294911
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-294911: exit status 85 (69.175423ms)

                                                
                                                
-- stdout --
	* Profile "addons-294911" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-294911"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-294911
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-294911: exit status 85 (69.1394ms)

                                                
                                                
-- stdout --
	* Profile "addons-294911" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-294911"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (155.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-294911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-294911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.099454083s)
--- PASS: TestAddons/Setup (155.10s)
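Note: the setup above enables every addon under test in a single start invocation; the same addons can also be toggled individually on an existing profile. A trimmed sketch reusing the profile name from this run (addon list shortened for brevity):

    minikube start -p addons-294911 --memory=4000 --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=ingress-dns --addons=registry --addons=metrics-server
    minikube addons enable csi-hostpath-driver -p addons-294911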

                                                
                                    
TestAddons/parallel/Registry (16.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 36.050303ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4qz4b" [7610bd4f-9226-4f2c-8284-ec69f5f1c21f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007034186s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dpb6q" [11af4342-d52d-4596-bd37-0c9cefafb061] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008236643s
addons_test.go:340: (dbg) Run:  kubectl --context addons-294911 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-294911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-294911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.919707516s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 ip
2023/12/25 12:19:23 [DEBUG] GET http://192.168.39.148:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.89s)
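Note: the registry check above boils down to resolving the addon's in-cluster service and probing it over HTTP; the commands below are lifted from the log and can be rerun by hand against the same profile:

    kubectl --context addons-294911 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    minikube -p addons-294911 ip   # host-side address; the proxy answered on port 5000 above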

                                                
                                    
TestAddons/parallel/InspektorGadget (10.9s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pjnkj" [f1e75a2d-53e2-43eb-b372-5c0d87841d5f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005607252s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-294911
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-294911: (5.895573358s)
--- PASS: TestAddons/parallel/InspektorGadget (10.90s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.08s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 45.206015ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-6dhqs" [8cfe97c5-d071-4349-bf5d-d30177e71d22] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005898331s
addons_test.go:415: (dbg) Run:  kubectl --context addons-294911 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.08s)

                                                
                                    
TestAddons/parallel/HelmTiller (15.22s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.969514ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-p7zsn" [03ef0447-4ed5-4a60-808d-639937566c1d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014539474s
addons_test.go:473: (dbg) Run:  kubectl --context addons-294911 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-294911 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.45592073s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.22s)

                                                
                                    
TestAddons/parallel/CSI (95.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 36.116301ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f911acc4-4fa0-49f5-b2f5-571c84e8addf] Pending
helpers_test.go:344: "task-pv-pod" [f911acc4-4fa0-49f5-b2f5-571c84e8addf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f911acc4-4fa0-49f5-b2f5-571c84e8addf] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004360898s
addons_test.go:584: (dbg) Run:  kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-294911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-294911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-294911 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-294911 delete pod task-pv-pod: (1.313961972s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-294911 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [507069e4-37b6-4d36-911e-becc9abea8bd] Pending
helpers_test.go:344: "task-pv-pod-restore" [507069e4-37b6-4d36-911e-becc9abea8bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [507069e4-37b6-4d36-911e-becc9abea8bd] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005064054s
addons_test.go:626: (dbg) Run:  kubectl --context addons-294911 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-294911 delete pod task-pv-pod-restore: (1.266124808s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-294911 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-294911 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-294911 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.063416269s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (95.46s)
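Note: the long run of identical "get pvc" lines above is the test helper polling the claim's phase; the underlying flow is just create-and-poll using the manifests shipped in minikube's testdata. A condensed sketch:

    kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-294911 get pvc hpvc -n default -o jsonpath={.status.phase}   # polled until ready
    kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-294911 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-294911 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}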

                                                
                                    
TestAddons/parallel/Headlamp (13.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-294911 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-294911 --alsologtostderr -v=1: (1.687136024s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-8x7wm" [19fe76e9-2ad4-4e1f-9955-0c9f045d375a] Pending
helpers_test.go:344: "headlamp-777fd4b855-8x7wm" [19fe76e9-2ad4-4e1f-9955-0c9f045d375a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-8x7wm" [19fe76e9-2ad4-4e1f-9955-0c9f045d375a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00531068s
--- PASS: TestAddons/parallel/Headlamp (13.69s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-zf296" [cb1eae55-cf52-482b-8f66-9ec043a3e680] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003849038s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-294911
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
TestAddons/parallel/LocalPath (55.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-294911 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-294911 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d237840c-2bac-446c-8c09-af207e7b9721] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d237840c-2bac-446c-8c09-af207e7b9721] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d237840c-2bac-446c-8c09-af207e7b9721] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005237281s
addons_test.go:891: (dbg) Run:  kubectl --context addons-294911 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 ssh "cat /opt/local-path-provisioner/pvc-d0b87c27-b3de-491f-9f3e-a2803f1d0726_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-294911 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-294911 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-294911 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-294911 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.172142599s)
--- PASS: TestAddons/parallel/LocalPath (55.19s)
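Note: the local-path check above provisions a PVC, writes file1 from the busybox test pod, and reads it back from the node path the provisioner created; the directory name is derived from the generated PVC UID, shown below as a placeholder:

    kubectl --context addons-294911 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-294911 apply -f testdata/storage-provisioner-rancher/pod.yaml
    minikube -p addons-294911 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"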

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6ssjm" [10e8dcb3-74eb-4487-bdf0-a6f69d444a40] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010368981s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-294911
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.68s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qqs7d" [a4f764ed-ad21-4e92-ba64-e1571de7e54e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005260277s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-294911 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-294911 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (56.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-553787 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-553787 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (55.152397939s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-553787 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-553787 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-553787 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-553787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-553787
E1225 13:13:56.706112 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-553787: (1.074763815s)
--- PASS: TestCertOptions (56.79s)
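Note: the extra SANs and the non-default API server port requested above end up in the generated apiserver certificate; both the start flags and the verification step come straight from the log and can be replayed on a throwaway profile:

    minikube start -p cert-options-demo --memory=2048 --driver=kvm2 --container-runtime=crio \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"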

                                                
                                    
TestCertExpiration (352.6s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-021022 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-021022 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.027487976s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-021022 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-021022 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m39.692114693s)
helpers_test.go:175: Cleaning up "cert-expiration-021022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-021022
--- PASS: TestCertExpiration (352.60s)

                                                
                                    
TestForceSystemdFlag (52.19s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-011162 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-011162 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.883092138s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-011162 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-011162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-011162
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-011162: (1.074057971s)
--- PASS: TestForceSystemdFlag (52.19s)
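Note: --force-systemd switches the guest's container runtime to the systemd cgroup manager, and the test confirms it by reading the CRI-O drop-in shown above. A sketch of the same check (the grep is only a convenience; the run above cats the whole file):

    minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager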

                                                
                                    
TestForceSystemdEnv (51.75s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-947648 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-947648 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.668257361s)
helpers_test.go:175: Cleaning up "force-systemd-env-947648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-947648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-947648: (1.085988451s)
--- PASS: TestForceSystemdEnv (51.75s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.25s)

                                                
                                    
TestErrorSpam/setup (48.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-426751 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-426751 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-426751 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-426751 --driver=kvm2  --container-runtime=crio: (48.34025221s)
--- PASS: TestErrorSpam/setup (48.34s)

                                                
                                    
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 status
--- PASS: TestErrorSpam/status (0.83s)

                                                
                                    
TestErrorSpam/pause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 pause
--- PASS: TestErrorSpam/pause (1.74s)

                                                
                                    
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (2.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 stop: (2.103283794s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426751 --log_dir /tmp/nospam-426751 stop
--- PASS: TestErrorSpam/stop (2.29s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17847-1442600/.minikube/files/etc/test/nested/copy/1449797/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (101.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-467117 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m41.323758585s)
--- PASS: TestFunctional/serial/StartWithProxy (101.32s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.49s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-467117 --alsologtostderr -v=8: (37.493022842s)
functional_test.go:659: soft start took 37.493848018s for "functional-467117" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.49s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-467117 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:3.1: (1.035341968s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:3.3: (1.125679968s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 cache add registry.k8s.io/pause:latest: (1.083178825s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-467117 /tmp/TestFunctionalserialCacheCmdcacheadd_local839134062/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache add minikube-local-cache-test:functional-467117
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache delete minikube-local-cache-test:functional-467117
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-467117
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (251.50853ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 cache reload: (1.091691516s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 kubectl -- --context functional-467117 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-467117 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (35.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-467117 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.638154203s)
functional_test.go:757: restart took 35.638338378s for "functional-467117" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.64s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-467117 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 logs: (1.596378722s)
--- PASS: TestFunctional/serial/LogsCmd (1.60s)

TestFunctional/serial/LogsFileCmd (1.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 logs --file /tmp/TestFunctionalserialLogsFileCmd1507477162/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 logs --file /tmp/TestFunctionalserialLogsFileCmd1507477162/001/logs.txt: (1.641922019s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-467117 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-467117
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-467117: exit status 115 (320.590196ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.76:31919 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-467117 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 config get cpus: exit status 14 (77.160398ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 config get cpus: exit status 14 (61.235057ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (15.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-467117 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-467117 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1458199: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.95s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-467117 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.564659ms)

                                                
                                                
-- stdout --
	* [functional-467117] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 12:29:30.189751 1457824 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:29:30.189914 1457824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:30.189927 1457824 out.go:309] Setting ErrFile to fd 2...
	I1225 12:29:30.189932 1457824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:30.190195 1457824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:29:30.190847 1457824 out.go:303] Setting JSON to false
	I1225 12:29:30.191857 1457824 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":155523,"bootTime":1703351847,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:29:30.191939 1457824 start.go:138] virtualization: kvm guest
	I1225 12:29:30.194327 1457824 out.go:177] * [functional-467117] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 12:29:30.195792 1457824 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:29:30.196968 1457824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:29:30.195822 1457824 notify.go:220] Checking for updates...
	I1225 12:29:30.199218 1457824 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:29:30.200398 1457824 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:29:30.201710 1457824 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:29:30.202894 1457824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:29:30.204689 1457824 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:29:30.205136 1457824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:29:30.205194 1457824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:29:30.221478 1457824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I1225 12:29:30.221885 1457824 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:29:30.222458 1457824 main.go:141] libmachine: Using API Version  1
	I1225 12:29:30.222488 1457824 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:29:30.222812 1457824 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:29:30.222996 1457824 main.go:141] libmachine: (functional-467117) Calling .DriverName
	I1225 12:29:30.223296 1457824 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:29:30.223607 1457824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:29:30.223656 1457824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:29:30.239734 1457824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1225 12:29:30.240213 1457824 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:29:30.240670 1457824 main.go:141] libmachine: Using API Version  1
	I1225 12:29:30.240697 1457824 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:29:30.241026 1457824 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:29:30.241247 1457824 main.go:141] libmachine: (functional-467117) Calling .DriverName
	I1225 12:29:30.275992 1457824 out.go:177] * Using the kvm2 driver based on existing profile
	I1225 12:29:30.277209 1457824 start.go:298] selected driver: kvm2
	I1225 12:29:30.277221 1457824 start.go:902] validating driver "kvm2" against &{Name:functional-467117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-467117 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:29:30.277319 1457824 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:29:30.279333 1457824 out.go:177] 
	W1225 12:29:30.280534 1457824 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1225 12:29:30.281758 1457824 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-467117 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-467117 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.976692ms)

                                                
                                                
-- stdout --
	* [functional-467117] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 12:29:30.498071 1457880 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:29:30.498198 1457880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:30.498206 1457880 out.go:309] Setting ErrFile to fd 2...
	I1225 12:29:30.498211 1457880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:29:30.498576 1457880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:29:30.499144 1457880 out.go:303] Setting JSON to false
	I1225 12:29:30.500052 1457880 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":155524,"bootTime":1703351847,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 12:29:30.500117 1457880 start.go:138] virtualization: kvm guest
	I1225 12:29:30.502161 1457880 out.go:177] * [functional-467117] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1225 12:29:30.503489 1457880 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 12:29:30.504709 1457880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 12:29:30.503511 1457880 notify.go:220] Checking for updates...
	I1225 12:29:30.506929 1457880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 12:29:30.508129 1457880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 12:29:30.509328 1457880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 12:29:30.510666 1457880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 12:29:30.512342 1457880 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:29:30.512789 1457880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:29:30.512838 1457880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:29:30.528126 1457880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1225 12:29:30.528581 1457880 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:29:30.529124 1457880 main.go:141] libmachine: Using API Version  1
	I1225 12:29:30.529151 1457880 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:29:30.529496 1457880 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:29:30.529688 1457880 main.go:141] libmachine: (functional-467117) Calling .DriverName
	I1225 12:29:30.529933 1457880 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 12:29:30.530230 1457880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:29:30.530266 1457880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:29:30.546541 1457880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1225 12:29:30.546981 1457880 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:29:30.547590 1457880 main.go:141] libmachine: Using API Version  1
	I1225 12:29:30.547633 1457880 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:29:30.548030 1457880 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:29:30.548278 1457880 main.go:141] libmachine: (functional-467117) Calling .DriverName
	I1225 12:29:30.582451 1457880 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1225 12:29:30.583801 1457880 start.go:298] selected driver: kvm2
	I1225 12:29:30.583813 1457880 start.go:902] validating driver "kvm2" against &{Name:functional-467117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-467117 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1225 12:29:30.583948 1457880 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 12:29:30.586013 1457880 out.go:177] 
	W1225 12:29:30.587200 1457880 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1225 12:29:30.588409 1457880 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

TestFunctional/parallel/ServiceCmdConnect (20.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-467117 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-467117 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-bhjv8" [bb68aed9-9691-43e2-bf93-21ababaff793] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1225 12:29:12.469384 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:17.590120 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-bhjv8" [bb68aed9-9691-43e2-bf93-21ababaff793] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.003684271s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.76:30329
functional_test.go:1674: http://192.168.39.76:30329: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-bhjv8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.76:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.76:30329
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.65s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (52.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0e871601-a55c-4ae7-9c3a-3c9ba28e2f5c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.010679086s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-467117 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-467117 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-467117 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-467117 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-467117 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c57f3eb-16a1-46ee-a4ca-45774047db99] Pending
helpers_test.go:344: "sp-pod" [6c57f3eb-16a1-46ee-a4ca-45774047db99] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1225 12:29:07.348143 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.354267 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.364554 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.384935 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:29:07.425280 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [6c57f3eb-16a1-46ee-a4ca-45774047db99] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.020671874s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-467117 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-467117 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-467117 delete -f testdata/storage-provisioner/pod.yaml: (1.124968499s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-467117 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4830fa50-a7f4-4d34-bb83-5a45eea39833] Pending
helpers_test.go:344: "sp-pod" [4830fa50-a7f4-4d34-bb83-5a45eea39833] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4830fa50-a7f4-4d34-bb83-5a45eea39833] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005125524s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-467117 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.47s)

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh -n functional-467117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cp functional-467117:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3529702572/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh -n functional-467117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh -n functional-467117 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

TestFunctional/parallel/MySQL (27.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-467117 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cn7h4" [890ae24a-65cc-4851-a7ac-c368dc9adb2c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cn7h4" [890ae24a-65cc-4851-a7ac-c368dc9adb2c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.060282008s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-467117 exec mysql-859648c796-cn7h4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-467117 exec mysql-859648c796-cn7h4 -- mysql -ppassword -e "show databases;": exit status 1 (400.297549ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-467117 exec mysql-859648c796-cn7h4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-467117 exec mysql-859648c796-cn7h4 -- mysql -ppassword -e "show databases;": exit status 1 (191.755534ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-467117 exec mysql-859648c796-cn7h4 -- mysql -ppassword -e "show databases;"
E1225 12:29:27.831128 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MySQL (27.42s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1449797/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /etc/test/nested/copy/1449797/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.83s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1449797.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /etc/ssl/certs/1449797.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1449797.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /usr/share/ca-certificates/1449797.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/14497972.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /etc/ssl/certs/14497972.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/14497972.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /usr/share/ca-certificates/14497972.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-467117 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "sudo systemctl is-active docker": exit status 1 (230.148424ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "sudo systemctl is-active containerd": exit status 1 (259.986703ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-467117 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-467117 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-pmn47" [d1dc869d-d7c5-4240-9865-fa602f0b6b8c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-pmn47" [d1dc869d-d7c5-4240-9865-fa602f0b6b8c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004826555s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.20s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-467117 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-467117
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-467117
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-467117 image ls --format short --alsologtostderr:
I1225 12:29:33.169112 1458220 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:33.169295 1458220 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:33.169308 1458220 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:33.169312 1458220 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:33.169482 1458220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:33.170142 1458220 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:33.170253 1458220 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:33.170671 1458220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:33.170735 1458220 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:33.186060 1458220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
I1225 12:29:33.186535 1458220 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:33.187214 1458220 main.go:141] libmachine: Using API Version  1
I1225 12:29:33.187248 1458220 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:33.187573 1458220 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:33.187770 1458220 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:33.189884 1458220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:33.189938 1458220 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:33.205612 1458220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
I1225 12:29:33.206091 1458220 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:33.206657 1458220 main.go:141] libmachine: Using API Version  1
I1225 12:29:33.206692 1458220 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:33.207087 1458220 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:33.207324 1458220 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:33.207542 1458220 ssh_runner.go:195] Run: systemctl --version
I1225 12:29:33.207568 1458220 main.go:141] libmachine: (functional-467117) Calling .GetSSHHostname
I1225 12:29:33.210735 1458220 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:33.211183 1458220 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:33.211212 1458220 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:33.211360 1458220 main.go:141] libmachine: (functional-467117) Calling .GetSSHPort
I1225 12:29:33.211560 1458220 main.go:141] libmachine: (functional-467117) Calling .GetSSHKeyPath
I1225 12:29:33.211719 1458220 main.go:141] libmachine: (functional-467117) Calling .GetSSHUsername
I1225 12:29:33.211891 1458220 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/functional-467117/id_rsa Username:docker}
I1225 12:29:33.372643 1458220 ssh_runner.go:195] Run: sudo crictl images --output json
I1225 12:29:33.503670 1458220 main.go:141] libmachine: Making call to close driver server
I1225 12:29:33.503685 1458220 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:33.504003 1458220 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:33.504022 1458220 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:33.504037 1458220 main.go:141] libmachine: Making call to close driver server
I1225 12:29:33.504046 1458220 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:33.504066 1458220 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:33.504272 1458220 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:33.504290 1458220 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:33.504303 1458220 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)
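The stderr above shows how "image ls" works on a crio cluster: the CLI opens an SSH session to the VM, reads the CRI image store with "sudo crictl images --output json", and renders the requested format on the host. A minimal sketch for inspecting the same store directly, assuming the profile is still running:

$ out/minikube-linux-amd64 -p functional-467117 ssh "sudo crictl images"                 # human-readable listing on the node
$ out/minikube-linux-amd64 -p functional-467117 ssh "sudo crictl images --output json"   # the raw JSON that image ls parses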

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-467117 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-467117  | 7c6fb88aa3d03 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/google-containers/addon-resizer  | functional-467117  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-467117  | 7254f98739b54 | 3.35kB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-467117 image ls --format table --alsologtostderr:
I1225 12:29:39.189582 1458454 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:39.189707 1458454 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:39.189715 1458454 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:39.189719 1458454 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:39.189919 1458454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:39.190591 1458454 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:39.190709 1458454 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:39.191072 1458454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:39.191138 1458454 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:39.207129 1458454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
I1225 12:29:39.207609 1458454 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:39.208288 1458454 main.go:141] libmachine: Using API Version  1
I1225 12:29:39.208321 1458454 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:39.208769 1458454 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:39.208994 1458454 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:39.211181 1458454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:39.211238 1458454 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:39.228428 1458454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
I1225 12:29:39.228900 1458454 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:39.229388 1458454 main.go:141] libmachine: Using API Version  1
I1225 12:29:39.229415 1458454 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:39.229755 1458454 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:39.229960 1458454 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:39.230165 1458454 ssh_runner.go:195] Run: systemctl --version
I1225 12:29:39.230204 1458454 main.go:141] libmachine: (functional-467117) Calling .GetSSHHostname
I1225 12:29:39.233692 1458454 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:39.234194 1458454 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:39.234236 1458454 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:39.234389 1458454 main.go:141] libmachine: (functional-467117) Calling .GetSSHPort
I1225 12:29:39.234617 1458454 main.go:141] libmachine: (functional-467117) Calling .GetSSHKeyPath
I1225 12:29:39.234771 1458454 main.go:141] libmachine: (functional-467117) Calling .GetSSHUsername
I1225 12:29:39.234949 1458454 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/functional-467117/id_rsa Username:docker}
I1225 12:29:39.358128 1458454 ssh_runner.go:195] Run: sudo crictl images --output json
I1225 12:29:39.425297 1458454 main.go:141] libmachine: Making call to close driver server
I1225 12:29:39.425321 1458454 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:39.425630 1458454 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:39.425681 1458454 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:39.425691 1458454 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:39.425701 1458454 main.go:141] libmachine: Making call to close driver server
I1225 12:29:39.425710 1458454 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:39.425944 1458454 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:39.425979 1458454 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:39.426002 1458454 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-467117 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab
5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7c6fb88aa3d033472273fe9f90ab78d0d44ce861392bdde2f92d0283ba908df6","repoDigests":["localhost/my-image@sha256:79238fbb9109e9041e061a54523a29ed7e6c1cb947280abc479eae5696139ce5"],"repoTags":["localhost/my-image:functional-467117"],"size"
:"1468600"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac
50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7704ff39f36ac23c6c2f3b04d9a1bff886b9e8f570061ef38e643eb5b8d644c1","repoDigests":["docker.io/library/350a60d8225f9e127644c5452ca28420fb5db2045984fcd11f5ac5a87589c0b3-tmp@sha256:9b00466056251d5994a83abedb8e81e7e46829b2bf86b36df502c853dc0a75f9"],"repoTags":[],"size":"1466018"},{"id":"7254f
98739b5429f9bf8c9134711c54a3f924df03c1d6f57a93661baa0ecc970","repoDigests":["localhost/minikube-local-cache-test@sha256:4a98e534a468f1a5931fafed249e5f51001441419dbbd471a7a21a8f9eb70ef5"],"repoTags":["localhost/minikube-local-cache-test:functional-467117"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisione
r@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dc
bcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-467117"],"size":"34114467"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8
s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-467117 image ls --format json --alsologtostderr:
I1225 12:29:38.815313 1458390 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:38.815646 1458390 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:38.815667 1458390 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:38.815674 1458390 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:38.816012 1458390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:38.816922 1458390 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:38.817107 1458390 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:38.817752 1458390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:38.817823 1458390 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:38.835014 1458390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
I1225 12:29:38.835524 1458390 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:38.836168 1458390 main.go:141] libmachine: Using API Version  1
I1225 12:29:38.836194 1458390 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:38.836608 1458390 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:38.836887 1458390 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:38.839145 1458390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:38.839200 1458390 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:38.856854 1458390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
I1225 12:29:38.857363 1458390 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:38.857871 1458390 main.go:141] libmachine: Using API Version  1
I1225 12:29:38.857895 1458390 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:38.858345 1458390 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:38.858579 1458390 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:38.858878 1458390 ssh_runner.go:195] Run: systemctl --version
I1225 12:29:38.858913 1458390 main.go:141] libmachine: (functional-467117) Calling .GetSSHHostname
I1225 12:29:38.862072 1458390 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:38.862598 1458390 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:38.862634 1458390 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:38.862812 1458390 main.go:141] libmachine: (functional-467117) Calling .GetSSHPort
I1225 12:29:38.863005 1458390 main.go:141] libmachine: (functional-467117) Calling .GetSSHKeyPath
I1225 12:29:38.863190 1458390 main.go:141] libmachine: (functional-467117) Calling .GetSSHUsername
I1225 12:29:38.863358 1458390 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/functional-467117/id_rsa Username:docker}
I1225 12:29:39.048654 1458390 ssh_runner.go:195] Run: sudo crictl images --output json
I1225 12:29:39.122968 1458390 main.go:141] libmachine: Making call to close driver server
I1225 12:29:39.122982 1458390 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:39.123320 1458390 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:39.123376 1458390 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:39.123393 1458390 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:39.123403 1458390 main.go:141] libmachine: Making call to close driver server
I1225 12:29:39.123414 1458390 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:39.123695 1458390 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:39.123712 1458390 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:39.123811 1458390 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
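The JSON format is the one suited to scripting. A minimal sketch that extracts just the tag list, assuming jq is available on the host (it is not part of the test environment):

$ out/minikube-linux-amd64 -p functional-467117 image ls --format json | jq -r '.[].repoTags[]'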

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-467117 image ls --format yaml --alsologtostderr:
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-467117
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 7254f98739b5429f9bf8c9134711c54a3f924df03c1d6f57a93661baa0ecc970
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a98e534a468f1a5931fafed249e5f51001441419dbbd471a7a21a8f9eb70ef5
repoTags:
- localhost/minikube-local-cache-test:functional-467117
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-467117 image ls --format yaml --alsologtostderr:
I1225 12:29:33.586718 1458243 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:33.586893 1458243 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:33.586907 1458243 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:33.586915 1458243 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:33.587241 1458243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:33.588153 1458243 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:33.588329 1458243 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:33.588917 1458243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:33.588992 1458243 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:33.604604 1458243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
I1225 12:29:33.605125 1458243 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:33.605792 1458243 main.go:141] libmachine: Using API Version  1
I1225 12:29:33.605816 1458243 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:33.606199 1458243 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:33.606425 1458243 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:33.608380 1458243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:33.608422 1458243 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:33.623612 1458243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
I1225 12:29:33.624082 1458243 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:33.624602 1458243 main.go:141] libmachine: Using API Version  1
I1225 12:29:33.624629 1458243 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:33.625012 1458243 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:33.625232 1458243 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:33.625447 1458243 ssh_runner.go:195] Run: systemctl --version
I1225 12:29:33.625486 1458243 main.go:141] libmachine: (functional-467117) Calling .GetSSHHostname
I1225 12:29:33.628631 1458243 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:33.629070 1458243 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:33.629104 1458243 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:33.629223 1458243 main.go:141] libmachine: (functional-467117) Calling .GetSSHPort
I1225 12:29:33.629415 1458243 main.go:141] libmachine: (functional-467117) Calling .GetSSHKeyPath
I1225 12:29:33.629582 1458243 main.go:141] libmachine: (functional-467117) Calling .GetSSHUsername
I1225 12:29:33.629727 1458243 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/functional-467117/id_rsa Username:docker}
I1225 12:29:33.744297 1458243 ssh_runner.go:195] Run: sudo crictl images --output json
I1225 12:29:33.838482 1458243 main.go:141] libmachine: Making call to close driver server
I1225 12:29:33.838504 1458243 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:33.838836 1458243 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:33.838859 1458243 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:33.838865 1458243 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:33.838880 1458243 main.go:141] libmachine: Making call to close driver server
I1225 12:29:33.838892 1458243 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:33.839155 1458243 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:33.839173 1458243 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh pgrep buildkitd: exit status 1 (250.787927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image build -t localhost/my-image:functional-467117 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image build -t localhost/my-image:functional-467117 testdata/build --alsologtostderr: (4.336048844s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-467117 image build -t localhost/my-image:functional-467117 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7704ff39f36
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-467117
--> 7c6fb88aa3d
Successfully tagged localhost/my-image:functional-467117
7c6fb88aa3d033472273fe9f90ab78d0d44ce861392bdde2f92d0283ba908df6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-467117 image build -t localhost/my-image:functional-467117 testdata/build --alsologtostderr:
I1225 12:29:34.155828 1458304 out.go:296] Setting OutFile to fd 1 ...
I1225 12:29:34.156125 1458304 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:34.156136 1458304 out.go:309] Setting ErrFile to fd 2...
I1225 12:29:34.156143 1458304 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1225 12:29:34.156367 1458304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
I1225 12:29:34.156991 1458304 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:34.157653 1458304 config.go:182] Loaded profile config "functional-467117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1225 12:29:34.158078 1458304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:34.158162 1458304 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:34.173792 1458304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42875
I1225 12:29:34.174280 1458304 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:34.174865 1458304 main.go:141] libmachine: Using API Version  1
I1225 12:29:34.174891 1458304 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:34.175293 1458304 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:34.175517 1458304 main.go:141] libmachine: (functional-467117) Calling .GetState
I1225 12:29:34.177516 1458304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1225 12:29:34.177557 1458304 main.go:141] libmachine: Launching plugin server for driver kvm2
I1225 12:29:34.193099 1458304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
I1225 12:29:34.193539 1458304 main.go:141] libmachine: () Calling .GetVersion
I1225 12:29:34.194028 1458304 main.go:141] libmachine: Using API Version  1
I1225 12:29:34.194056 1458304 main.go:141] libmachine: () Calling .SetConfigRaw
I1225 12:29:34.194409 1458304 main.go:141] libmachine: () Calling .GetMachineName
I1225 12:29:34.194621 1458304 main.go:141] libmachine: (functional-467117) Calling .DriverName
I1225 12:29:34.194854 1458304 ssh_runner.go:195] Run: systemctl --version
I1225 12:29:34.194893 1458304 main.go:141] libmachine: (functional-467117) Calling .GetSSHHostname
I1225 12:29:34.197844 1458304 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:34.198255 1458304 main.go:141] libmachine: (functional-467117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:22:1d", ip: ""} in network mk-functional-467117: {Iface:virbr1 ExpiryTime:2023-12-25 13:26:04 +0000 UTC Type:0 Mac:52:54:00:49:22:1d Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-467117 Clientid:01:52:54:00:49:22:1d}
I1225 12:29:34.198286 1458304 main.go:141] libmachine: (functional-467117) DBG | domain functional-467117 has defined IP address 192.168.39.76 and MAC address 52:54:00:49:22:1d in network mk-functional-467117
I1225 12:29:34.198530 1458304 main.go:141] libmachine: (functional-467117) Calling .GetSSHPort
I1225 12:29:34.198751 1458304 main.go:141] libmachine: (functional-467117) Calling .GetSSHKeyPath
I1225 12:29:34.198922 1458304 main.go:141] libmachine: (functional-467117) Calling .GetSSHUsername
I1225 12:29:34.199089 1458304 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/functional-467117/id_rsa Username:docker}
I1225 12:29:34.318544 1458304 build_images.go:151] Building image from path: /tmp/build.1051258884.tar
I1225 12:29:34.318659 1458304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1225 12:29:34.425285 1458304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1051258884.tar
I1225 12:29:34.441516 1458304 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1051258884.tar: stat -c "%s %y" /var/lib/minikube/build/build.1051258884.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1051258884.tar': No such file or directory
I1225 12:29:34.441560 1458304 ssh_runner.go:362] scp /tmp/build.1051258884.tar --> /var/lib/minikube/build/build.1051258884.tar (3072 bytes)
I1225 12:29:34.499348 1458304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1051258884
I1225 12:29:34.520869 1458304 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1051258884 -xf /var/lib/minikube/build/build.1051258884.tar
I1225 12:29:34.548188 1458304 crio.go:297] Building image: /var/lib/minikube/build/build.1051258884
I1225 12:29:34.548306 1458304 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-467117 /var/lib/minikube/build/build.1051258884 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1225 12:29:38.377350 1458304 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-467117 /var/lib/minikube/build/build.1051258884 --cgroup-manager=cgroupfs: (3.829002826s)
I1225 12:29:38.377472 1458304 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1051258884
I1225 12:29:38.404916 1458304 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1051258884.tar
I1225 12:29:38.425222 1458304 build_images.go:207] Built localhost/my-image:functional-467117 from /tmp/build.1051258884.tar
I1225 12:29:38.425264 1458304 build_images.go:123] succeeded building to: functional-467117
I1225 12:29:38.425269 1458304 build_images.go:124] failed building to: 
I1225 12:29:38.425364 1458304 main.go:141] libmachine: Making call to close driver server
I1225 12:29:38.425383 1458304 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:38.425737 1458304 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:38.425763 1458304 main.go:141] libmachine: Making call to close connection to plugin binary
I1225 12:29:38.425786 1458304 main.go:141] libmachine: (functional-467117) DBG | Closing plugin on server side
I1225 12:29:38.425822 1458304 main.go:141] libmachine: Making call to close driver server
I1225 12:29:38.425838 1458304 main.go:141] libmachine: (functional-467117) Calling .Close
I1225 12:29:38.426085 1458304 main.go:141] libmachine: Successfully made call to close driver server
I1225 12:29:38.426099 1458304 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.90s)
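The stderr above documents the build path for the crio runtime: the local build context (per the STEP lines, a three-step build file: FROM the busybox image, RUN true, ADD content.txt) is tarred, copied to /var/lib/minikube/build on the node, and built there with "sudo podman build ... --cgroup-manager=cgroupfs" before the temporary directory and tarball are removed. A minimal sketch of the user-facing flow, mirroring the logged commands; the grep is only an illustrative verification step:

$ out/minikube-linux-amd64 -p functional-467117 image build -t localhost/my-image:functional-467117 testdata/build --alsologtostderr
$ out/minikube-linux-amd64 -p functional-467117 image ls | grep my-image   # confirm the image landed in the crio store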

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-467117
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr: (4.49557596s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.96s)
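Setup and ImageLoadDaemon together exercise the host-daemon path: pull an image into the host's docker daemon, retag it with the profile name, then copy it from the daemon into the cluster's image store. A minimal sketch of that sequence, mirroring the logged commands; the final grep is only an illustrative check:

$ docker pull gcr.io/google-containers/addon-resizer:1.8.8
$ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-467117
$ out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117
$ out/minikube-linux-amd64 -p functional-467117 image ls | grep addon-resizer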

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
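The three UpdateContextCmd cases all run the same command; update-context rewrites the kubeconfig entry for the profile so that the server address tracks the VM's current IP and port. A minimal sketch, assuming the functional-467117 profile; the kubectl line is only an illustrative way to read back the resulting server URL:

$ out/minikube-linux-amd64 -p functional-467117 update-context --alsologtostderr -v=2
$ kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-467117")].cluster.server}'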

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image load --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr: (2.731043401s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service list -o json
functional_test.go:1493: Took "452.860094ms" to run "out/minikube-linux-amd64 -p functional-467117 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service --namespace=default --https --url hello-node
E1225 12:29:09.908191 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
functional_test.go:1521: found endpoint: https://192.168.39.76:30874
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.76:30874
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
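The ServiceCmd HTTPS/Format/URL cases all resolve the same NodePort endpoint for hello-node; the values below are the ones reported in the log (node IP 192.168.39.76, NodePort 30874). A minimal sketch showing the minikube lookup next to an equivalent kubectl query:

$ out/minikube-linux-amd64 -p functional-467117 service hello-node --url
http://192.168.39.76:30874
$ kubectl --context functional-467117 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
30874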

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image save gcr.io/google-containers/addon-resizer:functional-467117 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image save gcr.io/google-containers/addon-resizer:functional-467117 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.389486086s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image rm gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.358849929s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-467117
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 image save --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-467117 image save --daemon gcr.io/google-containers/addon-resizer:functional-467117 --alsologtostderr: (1.20655322s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-467117
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "294.440464ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "78.881805ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/MountCmd/any-port (10.1s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdany-port963594230/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703507369727045418" to /tmp/TestFunctionalparallelMountCmdany-port963594230/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703507369727045418" to /tmp/TestFunctionalparallelMountCmdany-port963594230/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703507369727045418" to /tmp/TestFunctionalparallelMountCmdany-port963594230/001/test-1703507369727045418
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.626607ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 25 12:29 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 25 12:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 25 12:29 test-1703507369727045418
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh cat /mount-9p/test-1703507369727045418
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-467117 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dabdbc8f-8ae8-46cf-8ce0-0034afe24a28] Pending
helpers_test.go:344: "busybox-mount" [dabdbc8f-8ae8-46cf-8ce0-0034afe24a28] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dabdbc8f-8ae8-46cf-8ce0-0034afe24a28] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dabdbc8f-8ae8-46cf-8ce0-0034afe24a28] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005339764s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-467117 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdany-port963594230/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.10s)
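
The passing run above exercises minikube's 9p host mount end to end: start the mount, confirm it with findmnt from inside the guest, read the shared files, then unmount. As a rough manual reproduction, the same checks can be run by hand; this is only a sketch that reuses the profile name from this log, and /tmp/mount-src is a stand-in for the per-run temp directory the test actually mounts.

  # assumes the functional-467117 profile from this log is running; /tmp/mount-src is a stand-in host path
  out/minikube-linux-amd64 mount -p functional-467117 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T /mount-9p | grep 9p"   # mount is visible in the guest
  out/minikube-linux-amd64 -p functional-467117 ssh -- ls -la /mount-9p                # host files show up under /mount-9p
  out/minikube-linux-amd64 -p functional-467117 ssh "sudo umount -f /mount-9p"         # tear the mount down again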

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "287.429209ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.567347ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T" /mount1: exit status 1 (254.615473ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-467117 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-467117 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-467117 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2525179329/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-467117
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-467117
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-467117
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (78.25s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-441885 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1225 12:30:29.272738 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-441885 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.251389514s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.25s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.96s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons enable ingress --alsologtostderr -v=5: (12.963509312s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.96s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-441885 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestJSONOutput/start/Command (65.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-721648 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1225 12:34:35.034765 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:34:37.671639 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:35:18.632025 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-721648 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.769053904s)
--- PASS: TestJSONOutput/start/Command (65.77s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-721648 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-721648 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-721648 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-721648 --output=json --user=testUser: (7.113355976s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-017992 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-017992 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.322138ms)

-- stdout --
	{"specversion":"1.0","id":"781d9ace-889f-491d-94b4-02990164bdd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-017992] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a06c8da-05f9-4b51-9402-b8644056c7a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17847"}}
	{"specversion":"1.0","id":"3076639d-9924-4bcf-8249-31cb4e4f8582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6eb65f8b-e3e4-49ff-bf14-b8631d538952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig"}}
	{"specversion":"1.0","id":"0b0969b0-1362-4810-a902-ed3daa5585ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube"}}
	{"specversion":"1.0","id":"41c1953e-b3af-47f5-a016-3922d4cba3b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6f462fa2-ec46-45b8-a0fc-064c4a672746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c473afd-8d9d-498f-9594-6d9dbdcf763a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-017992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-017992
--- PASS: TestErrorJSONOutput (0.23s)
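
Each stdout line above is a self-contained CloudEvents-style JSON object, so the human-readable text can be pulled back out of the stream with any JSON tool. A minimal sketch, assuming jq is installed (the test itself does not use it) and reusing the exact command from this run:

  # print only the message text carried in each event's data payload
  out/minikube-linux-amd64 start -p json-output-error-017992 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.data.message != null) | .data.message'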

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (100.58s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-346439 --driver=kvm2  --container-runtime=crio
E1225 12:36:26.362749 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.368171 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.378534 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.398892 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.439260 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.519671 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:26.680565 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:27.000914 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:27.642003 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:28.922583 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:31.483190 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-346439 --driver=kvm2  --container-runtime=crio: (48.880970421s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-349284 --driver=kvm2  --container-runtime=crio
E1225 12:36:36.604001 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:36:40.554673 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:36:46.845040 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:37:07.325370 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-349284 --driver=kvm2  --container-runtime=crio: (48.679079324s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-346439
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-349284
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-349284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-349284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-349284: (1.055167939s)
helpers_test.go:175: Cleaning up "first-346439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-346439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-346439: (1.019237614s)
--- PASS: TestMinikubeProfile (100.58s)

TestMountStart/serial/StartWithMountFirst (27.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-537748 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1225 12:37:48.285783 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-537748 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.50722065s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.51s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-537748 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-537748 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (25.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-555001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-555001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.617093673s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.62s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-537748 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-555001
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-555001: (1.236266401s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (26.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-555001
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-555001: (25.432848823s)
--- PASS: TestMountStart/serial/RestartStopped (26.43s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-555001 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (112.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-544936 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1225 12:38:56.706601 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:39:07.347785 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 12:39:10.206973 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:39:24.395029 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-544936 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.928663292s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.40s)

TestMultiNode/serial/DeployApp2Nodes (4.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-544936 -- rollout status deployment/busybox: (2.507195871s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-qn48b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-544936 -- exec busybox-5bc68d56bd-z5f74 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.40s)

TestMultiNode/serial/AddNode (43.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-544936 -v 3 --alsologtostderr
E1225 12:41:26.362924 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-544936 -v 3 --alsologtostderr: (43.107239918s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.76s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-544936 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (8.18s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp testdata/cp-test.txt multinode-544936:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2589582466/001/cp-test_multinode-544936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936:/home/docker/cp-test.txt multinode-544936-m02:/home/docker/cp-test_multinode-544936_multinode-544936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test_multinode-544936_multinode-544936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936:/home/docker/cp-test.txt multinode-544936-m03:/home/docker/cp-test_multinode-544936_multinode-544936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test_multinode-544936_multinode-544936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp testdata/cp-test.txt multinode-544936-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2589582466/001/cp-test_multinode-544936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt multinode-544936:/home/docker/cp-test_multinode-544936-m02_multinode-544936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test_multinode-544936-m02_multinode-544936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m02:/home/docker/cp-test.txt multinode-544936-m03:/home/docker/cp-test_multinode-544936-m02_multinode-544936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test_multinode-544936-m02_multinode-544936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp testdata/cp-test.txt multinode-544936-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2589582466/001/cp-test_multinode-544936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt multinode-544936:/home/docker/cp-test_multinode-544936-m03_multinode-544936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test_multinode-544936-m03_multinode-544936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936-m03:/home/docker/cp-test.txt multinode-544936-m02:/home/docker/cp-test_multinode-544936-m03_multinode-544936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test_multinode-544936-m03_multinode-544936-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.18s)
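
The copy checks above all follow the same push/pull/verify pattern with `minikube cp` and `minikube ssh`. A minimal sketch of one round trip, lifted directly from the commands in this log (same profile and paths as this run):

  # push a local file into the control-plane node, read it back, then copy it across to a worker and verify there
  out/minikube-linux-amd64 -p multinode-544936 cp testdata/cp-test.txt multinode-544936:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-amd64 -p multinode-544936 cp multinode-544936:/home/docker/cp-test.txt multinode-544936-m02:/home/docker/cp-test_multinode-544936_multinode-544936-m02.txt
  out/minikube-linux-amd64 -p multinode-544936 ssh -n multinode-544936-m02 "sudo cat /home/docker/cp-test_multinode-544936_multinode-544936-m02.txt"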

TestMultiNode/serial/StopNode (3.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-544936 node stop m03: (2.099264137s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-544936 status: exit status 7 (470.806948ms)

-- stdout --
	multinode-544936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-544936-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-544936-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr: exit status 7 (468.724556ms)

-- stdout --
	multinode-544936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-544936-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-544936-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1225 12:41:42.496016 1465758 out.go:296] Setting OutFile to fd 1 ...
	I1225 12:41:42.496170 1465758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:41:42.496178 1465758 out.go:309] Setting ErrFile to fd 2...
	I1225 12:41:42.496183 1465758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 12:41:42.496360 1465758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 12:41:42.496525 1465758 out.go:303] Setting JSON to false
	I1225 12:41:42.496553 1465758 mustload.go:65] Loading cluster: multinode-544936
	I1225 12:41:42.496606 1465758 notify.go:220] Checking for updates...
	I1225 12:41:42.496923 1465758 config.go:182] Loaded profile config "multinode-544936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 12:41:42.496938 1465758 status.go:255] checking status of multinode-544936 ...
	I1225 12:41:42.497400 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.497463 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.518306 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I1225 12:41:42.518796 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.519465 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.519501 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.519845 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.520019 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetState
	I1225 12:41:42.521556 1465758 status.go:330] multinode-544936 host status = "Running" (err=<nil>)
	I1225 12:41:42.521578 1465758 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:41:42.521870 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.521910 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.538727 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I1225 12:41:42.539178 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.539673 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.539695 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.540010 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.540195 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetIP
	I1225 12:41:42.543515 1465758 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:41:42.543958 1465758 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:41:42.543998 1465758 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:41:42.544104 1465758 host.go:66] Checking if "multinode-544936" exists ...
	I1225 12:41:42.544386 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.544425 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.559711 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1225 12:41:42.560190 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.560660 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.560684 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.561032 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.561273 1465758 main.go:141] libmachine: (multinode-544936) Calling .DriverName
	I1225 12:41:42.561486 1465758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 12:41:42.561511 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetSSHHostname
	I1225 12:41:42.564387 1465758 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:41:42.564834 1465758 main.go:141] libmachine: (multinode-544936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:ee:9c", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:39:03 +0000 UTC Type:0 Mac:52:54:00:c0:ee:9c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-544936 Clientid:01:52:54:00:c0:ee:9c}
	I1225 12:41:42.564866 1465758 main.go:141] libmachine: (multinode-544936) DBG | domain multinode-544936 has defined IP address 192.168.39.21 and MAC address 52:54:00:c0:ee:9c in network mk-multinode-544936
	I1225 12:41:42.564981 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetSSHPort
	I1225 12:41:42.565192 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetSSHKeyPath
	I1225 12:41:42.565361 1465758 main.go:141] libmachine: (multinode-544936) Calling .GetSSHUsername
	I1225 12:41:42.565536 1465758 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936/id_rsa Username:docker}
	I1225 12:41:42.654602 1465758 ssh_runner.go:195] Run: systemctl --version
	I1225 12:41:42.660997 1465758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:41:42.677071 1465758 kubeconfig.go:92] found "multinode-544936" server: "https://192.168.39.21:8443"
	I1225 12:41:42.677106 1465758 api_server.go:166] Checking apiserver status ...
	I1225 12:41:42.677151 1465758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 12:41:42.690675 1465758 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	I1225 12:41:42.703572 1465758 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/podb7cd9addac4657510db86c61386c4e6f/crio-56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e"
	I1225 12:41:42.703665 1465758 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb7cd9addac4657510db86c61386c4e6f/crio-56252699573fb5c34b37211ca1a9ececabb95cc435645ed96571c2488913e82e/freezer.state
	I1225 12:41:42.714559 1465758 api_server.go:204] freezer state: "THAWED"
	I1225 12:41:42.714598 1465758 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I1225 12:41:42.719635 1465758 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I1225 12:41:42.719664 1465758 status.go:421] multinode-544936 apiserver status = Running (err=<nil>)
	I1225 12:41:42.719674 1465758 status.go:257] multinode-544936 status: &{Name:multinode-544936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 12:41:42.719690 1465758 status.go:255] checking status of multinode-544936-m02 ...
	I1225 12:41:42.720028 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.720082 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.735823 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I1225 12:41:42.736334 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.736877 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.736899 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.737228 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.737466 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetState
	I1225 12:41:42.739037 1465758 status.go:330] multinode-544936-m02 host status = "Running" (err=<nil>)
	I1225 12:41:42.739068 1465758 host.go:66] Checking if "multinode-544936-m02" exists ...
	I1225 12:41:42.739496 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.739552 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.755384 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1225 12:41:42.755914 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.756493 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.756524 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.756840 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.757094 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetIP
	I1225 12:41:42.759970 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:41:42.760357 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:41:42.760377 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:41:42.760494 1465758 host.go:66] Checking if "multinode-544936-m02" exists ...
	I1225 12:41:42.760819 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.760870 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.776452 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1225 12:41:42.776917 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.777481 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.777503 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.777803 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.777962 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .DriverName
	I1225 12:41:42.778170 1465758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 12:41:42.778193 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHHostname
	I1225 12:41:42.780908 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:41:42.781372 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ce:ff", ip: ""} in network mk-multinode-544936: {Iface:virbr1 ExpiryTime:2023-12-25 13:40:09 +0000 UTC Type:0 Mac:52:54:00:7c:ce:ff Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-544936-m02 Clientid:01:52:54:00:7c:ce:ff}
	I1225 12:41:42.781411 1465758 main.go:141] libmachine: (multinode-544936-m02) DBG | domain multinode-544936-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:7c:ce:ff in network mk-multinode-544936
	I1225 12:41:42.781510 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHPort
	I1225 12:41:42.781719 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHKeyPath
	I1225 12:41:42.781894 1465758 main.go:141] libmachine: (multinode-544936-m02) Calling .GetSSHUsername
	I1225 12:41:42.782007 1465758 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17847-1442600/.minikube/machines/multinode-544936-m02/id_rsa Username:docker}
	I1225 12:41:42.870229 1465758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 12:41:42.884083 1465758 status.go:257] multinode-544936-m02 status: &{Name:multinode-544936-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1225 12:41:42.884127 1465758 status.go:255] checking status of multinode-544936-m03 ...
	I1225 12:41:42.884511 1465758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1225 12:41:42.884574 1465758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1225 12:41:42.900902 1465758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I1225 12:41:42.901360 1465758 main.go:141] libmachine: () Calling .GetVersion
	I1225 12:41:42.901876 1465758 main.go:141] libmachine: Using API Version  1
	I1225 12:41:42.901900 1465758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1225 12:41:42.902196 1465758 main.go:141] libmachine: () Calling .GetMachineName
	I1225 12:41:42.902380 1465758 main.go:141] libmachine: (multinode-544936-m03) Calling .GetState
	I1225 12:41:42.904053 1465758 status.go:330] multinode-544936-m03 host status = "Stopped" (err=<nil>)
	I1225 12:41:42.904071 1465758 status.go:343] host is not running, skipping remaining checks
	I1225 12:41:42.904087 1465758 status.go:257] multinode-544936-m03 status: &{Name:multinode-544936-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.04s)
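
The StopNode status output above walks through the probes `minikube status` runs per node: kubelet via systemctl, the apiserver process and its freezer cgroup, then the /healthz endpoint. A rough by-hand approximation of the same probes (assuming the multinode-544936 profile from this run still exists; curl stands in for minikube's authenticated healthz client):
	out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
	out/minikube-linux-amd64 -p multinode-544936 ssh "sudo systemctl is-active --quiet service kubelet"
	out/minikube-linux-amd64 -p multinode-544936 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	out/minikube-linux-amd64 -p multinode-544936 ssh "curl -ks https://192.168.39.21:8443/healthz"   # prints "ok" while the apiserver is healthy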

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 node start m03 --alsologtostderr
E1225 12:41:54.047990 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-544936 node start m03 --alsologtostderr: (28.814570132s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-544936 node delete m03: (1.063449824s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.66s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (448.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-544936 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1225 12:56:26.363472 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 12:58:56.706476 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 12:59:07.348199 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:01:26.362806 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 13:02:10.398640 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-544936 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m27.470022839s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-544936 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (448.08s)
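
The node-readiness assertion in the multinode tests reduces to the go-template used above, which prints one status line per node's Ready condition; a usage sketch against any live context:
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'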

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-544936
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-544936-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-544936-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (81.492849ms)

                                                
                                                
-- stdout --
	* [multinode-544936-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-544936-m02' is duplicated with machine name 'multinode-544936-m02' in profile 'multinode-544936'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-544936-m03 --driver=kvm2  --container-runtime=crio
E1225 13:03:56.706133 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:04:07.347679 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-544936-m03 --driver=kvm2  --container-runtime=crio: (47.30380789s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-544936
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-544936: exit status 80 (257.95555ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-544936
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-544936-m03 already exists in multinode-544936-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-544936-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-544936-m03: (1.031470129s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.74s)
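
The name-conflict checks above boil down to two rules: a new profile may not reuse a machine name owned by an existing multinode profile (exit 14), and `node add` refuses a node name already taken by a standalone profile (exit 80). A condensed replay, assuming multinode-544936 is still running:
	out/minikube-linux-amd64 start -p multinode-544936-m02 --driver=kvm2 --container-runtime=crio   # exit 14: duplicates a machine name in profile multinode-544936
	out/minikube-linux-amd64 start -p multinode-544936-m03 --driver=kvm2 --container-runtime=crio   # succeeds as a standalone profile
	out/minikube-linux-amd64 node add -p multinode-544936                                           # exit 80: node m03 is already taken by that profile
	out/minikube-linux-amd64 delete -p multinode-544936-m03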

                                                
                                    
TestScheduledStopUnix (124.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-430809 --memory=2048 --driver=kvm2  --container-runtime=crio
E1225 13:09:07.348031 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:09:29.411828 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-430809 --memory=2048 --driver=kvm2  --container-runtime=crio: (52.294423183s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-430809 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-430809 -n scheduled-stop-430809
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-430809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-430809 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-430809 -n scheduled-stop-430809
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-430809
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-430809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-430809
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-430809: exit status 7 (87.288959ms)

                                                
                                                
-- stdout --
	scheduled-stop-430809
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-430809 -n scheduled-stop-430809
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-430809 -n scheduled-stop-430809: exit status 7 (86.753933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-430809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-430809
--- PASS: TestScheduledStopUnix (124.27s)
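
The scheduled-stop flow exercised above: schedule a stop, cancel it, schedule a short one, then confirm the profile reports Stopped (exit status 7) once it fires. A condensed sketch with the same flags:
	out/minikube-linux-amd64 stop -p scheduled-stop-430809 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-430809 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-430809 --schedule 15s
	sleep 20
	out/minikube-linux-amd64 status -p scheduled-stop-430809   # exit 7 once the scheduled stop has fired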

                                                
                                    
TestKubernetesUpgrade (180.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m43.085987747s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-435411
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-435411: (6.13138883s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-435411 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-435411 status --format={{.Host}}: exit status 7 (87.681084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.978965579s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-435411 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (126.509353ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-435411] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-435411
	    minikube start -p kubernetes-upgrade-435411 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4354112 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-435411 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.979396975s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-435411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-435411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-435411: (1.18161562s)
--- PASS: TestKubernetesUpgrade (180.65s)
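
The upgrade path validated above: install v1.16.0, stop, restart on v1.29.0-rc.2, then confirm a downgrade back to v1.16.0 is refused (exit status 106) while a repeat start on the new version still succeeds. As a shell sketch with the same flags:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-435411
	out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p kubernetes-upgrade-435411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio   # exit 106: downgrade refused
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-435411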

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (109.154133ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-935850] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
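
The guard tested above: `--no-kubernetes` cannot be combined with `--kubernetes-version`, and the suggested fix is to unset any global version override. A minimal reproduction:
	out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exit 14 (MK_USAGE)
	out/minikube-linux-amd64 config unset kubernetes-version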

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (106.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935850 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935850 --driver=kvm2  --container-runtime=crio: (1m45.7187287s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-935850 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.03s)

                                                
                                    
TestPause/serial/Start (138.31s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-871992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-871992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m18.313353781s)
--- PASS: TestPause/serial/Start (138.31s)

                                                
                                    
TestNetworkPlugins/group/false (5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-712615 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-712615 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (147.546919ms)

                                                
                                                
-- stdout --
	* [false-712615] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 13:12:50.228120 1474916 out.go:296] Setting OutFile to fd 1 ...
	I1225 13:12:50.228347 1474916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:12:50.228361 1474916 out.go:309] Setting ErrFile to fd 2...
	I1225 13:12:50.228368 1474916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1225 13:12:50.228676 1474916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17847-1442600/.minikube/bin
	I1225 13:12:50.229444 1474916 out.go:303] Setting JSON to false
	I1225 13:12:50.230815 1474916 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":158123,"bootTime":1703351847,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 13:12:50.230925 1474916 start.go:138] virtualization: kvm guest
	I1225 13:12:50.233610 1474916 out.go:177] * [false-712615] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1225 13:12:50.235832 1474916 out.go:177]   - MINIKUBE_LOCATION=17847
	I1225 13:12:50.235897 1474916 notify.go:220] Checking for updates...
	I1225 13:12:50.238894 1474916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 13:12:50.240756 1474916 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17847-1442600/kubeconfig
	I1225 13:12:50.242243 1474916 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17847-1442600/.minikube
	I1225 13:12:50.243731 1474916 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 13:12:50.245342 1474916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 13:12:50.247373 1474916 config.go:182] Loaded profile config "NoKubernetes-935850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:12:50.247497 1474916 config.go:182] Loaded profile config "pause-871992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1225 13:12:50.247565 1474916 config.go:182] Loaded profile config "running-upgrade-941659": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1225 13:12:50.247680 1474916 driver.go:392] Setting default libvirt URI to qemu:///system
	I1225 13:12:50.290235 1474916 out.go:177] * Using the kvm2 driver based on user configuration
	I1225 13:12:50.291763 1474916 start.go:298] selected driver: kvm2
	I1225 13:12:50.291786 1474916 start.go:902] validating driver "kvm2" against <nil>
	I1225 13:12:50.291802 1474916 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 13:12:50.294176 1474916 out.go:177] 
	W1225 13:12:50.295707 1474916 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1225 13:12:50.297217 1474916 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-712615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-712615

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-712615"

                                                
                                                
----------------------- debugLogs end: false-712615 [took: 4.66890743s] --------------------------------
helpers_test.go:175: Cleaning up "false-712615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-712615
--- PASS: TestNetworkPlugins/group/false (5.00s)
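
The early exit above is the CNI guard: with the crio runtime, --cni=false is rejected before any VM is created (exit status 14). A one-line reproduction with the same flags:
	out/minikube-linux-amd64 start -p false-712615 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio   # exit 14: the "crio" container runtime requires CNI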

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --driver=kvm2  --container-runtime=crio: (9.948296057s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-935850 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-935850 status -o json: exit status 2 (293.636881ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-935850","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-935850
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-935850: (1.117968965s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.36s)
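
After the switch to --no-kubernetes above, `status -o json` reports the host Running but kubelet and apiserver Stopped, and the command itself exits 2; a quick check on the same profile:
	out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-935850 status -o json   # exit 2; "Kubelet":"Stopped","APIServer":"Stopped"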

                                                
                                    
TestNoKubernetes/serial/Start (51.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935850 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.306667988s)
--- PASS: TestNoKubernetes/serial/Start (51.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-935850 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-935850 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.73036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
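
The kubelet check above is a plain systemd probe over SSH; it exits non-zero while kubelet is not running, which is exactly what a --no-kubernetes profile should report. Sketch:
	out/minikube-linux-amd64 ssh -p NoKubernetes-935850 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # 1 while kubelet is stopped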

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.64354415s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.63s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-935850
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-935850: (1.223562673s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (70.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935850 --driver=kvm2  --container-runtime=crio
E1225 13:14:07.348224 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935850 --driver=kvm2  --container-runtime=crio: (1m10.260045094s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (70.26s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (87.82s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-871992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-871992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.793561224s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (87.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-935850 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-935850 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.49351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                    
TestPause/serial/Pause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-871992 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-871992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-871992 --output=json --layout=cluster: exit status 2 (357.924763ms)

                                                
                                                
-- stdout --
	{"Name":"pause-871992","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-871992","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
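In the --layout=cluster output above, component health is encoded with HTTP-style status codes: 200/OK, 405/Stopped and 418/Paused all appear in the JSON the test captured, so exit status 2 here reflects a paused cluster rather than a failed command. A minimal sketch, assuming jq, that flattens that JSON into one line per component (field names are taken from the output above):

    out/minikube-linux-amd64 status -p pause-871992 --output=json --layout=cluster \
      | jq -r '.Nodes[] | .Name as $node | .Components | to_entries[]
               | "\($node) \(.key): \(.value.StatusName) (\(.value.StatusCode))"'
    # For the run above this reports apiserver: Paused (418) and kubelet: Stopped (405).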

                                                
                                    
TestPause/serial/Unpause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-871992 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

                                                
                                    
TestPause/serial/PauseAgain (1.07s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-871992 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-871992 --alsologtostderr -v=5: (1.069256272s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

                                                
                                    
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-871992 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-871992 --alsologtostderr -v=5: (1.119635785s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-198979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1225 13:16:26.363375 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-198979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m19.154415471s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-330063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-330063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m30.984432437s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-198979 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af0877b6-43de-4c64-b5ac-279fa3325551] Pending
helpers_test.go:344: "busybox" [af0877b6-43de-4c64-b5ac-279fa3325551] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af0877b6-43de-4c64-b5ac-279fa3325551] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006332484s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-198979 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)
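Each DeployApp step ends with the same probe: running ulimit -n inside the busybox pod to read the container's open-file soft limit. A minimal sketch of that check against the pod created above from testdata/busybox.yaml (default namespace); the limit is printed as a bare number:

    kubectl --context old-k8s-version-198979 exec busybox -- /bin/sh -c "ulimit -n"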

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-198979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-198979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175363529s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-198979 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)
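The addons enable call above overrides both the image and the registry for the metrics-server addon (the --images/--registries flags keyed by MetricsServer), and the follow-up kubectl describe is how the test inspects the resulting Deployment. A minimal sketch that pulls only the container image from that Deployment instead of the full describe output (standard kubectl jsonpath, nothing specific to this suite); the printed image should carry the fake.domain registry override supplied above:

    kubectl --context old-k8s-version-198979 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'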

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-330063 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a84e545-a50b-403e-9963-1bf5157d9cde] Pending
helpers_test.go:344: "busybox" [7a84e545-a50b-403e-9963-1bf5157d9cde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a84e545-a50b-403e-9963-1bf5157d9cde] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004326731s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-330063 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-330063 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-330063 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (128.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880612 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880612 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m8.782348472s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (128.78s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-176938
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (124.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-344803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-344803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m4.11608662s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (124.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (426.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-198979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1225 13:21:26.363440 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-198979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (7m6.683966061s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-198979 -n old-k8s-version-198979
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (426.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-880612 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22ab1036-0223-4df4-8c3d-ea4eb111089c] Pending
helpers_test.go:344: "busybox" [22ab1036-0223-4df4-8c3d-ea4eb111089c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22ab1036-0223-4df4-8c3d-ea4eb111089c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004399961s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-880612 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (552.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-330063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-330063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m11.771270409s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-330063 -n no-preload-330063
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (552.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126891952s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-880612 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [68f206da-9faa-41ef-a232-665a04743085] Pending
helpers_test.go:344: "busybox" [68f206da-9faa-41ef-a232-665a04743085] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [68f206da-9faa-41ef-a232-665a04743085] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004847673s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-344803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-344803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.149052544s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-344803 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (401.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880612 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880612 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (6m41.477949499s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880612 -n embed-certs-880612
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (401.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (706.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-344803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1225 13:26:09.413045 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 13:26:26.363531 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-344803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m45.816389338s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-344803 -n default-k8s-diff-port-344803
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (706.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (59.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-058636 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-058636 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.253814458s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-058636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-058636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.485292809s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (103.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1225 13:48:12.755563 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:12.760866 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:12.771058 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:12.791194 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:12.832373 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:12.912760 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:13.073625 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:13.394271 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:14.034517 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:15.315579 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:17.876532 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:22.997518 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:33.237977 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:53.719133 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:48:56.706172 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.419750584s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (414.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-058636 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1225 13:49:31.704455 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-058636 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m53.681539726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-058636 -n newest-cni-058636
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (414.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880612 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-880612 --alsologtostderr -v=1
E1225 13:49:34.679891 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612: exit status 2 (286.418183ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-880612 -n embed-certs-880612: exit status 2 (298.098805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-880612 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880612 -n embed-certs-880612
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-880612 -n embed-certs-880612
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q66m8" [3cd1dfa9-1af2-4c49-b619-65483b678f11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q66m8" [3cd1dfa9-1af2-4c49-b619-65483b678f11] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004118238s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (342.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1225 13:49:41.944715 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (5m42.691602403s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (342.69s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
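The Localhost and HairPin probes above both use nc in port-scan mode: -z connects without sending data and -w 5 bounds the wait, and HairPin specifically checks that the netcat pod can reach its own Service name (hairpin traffic). A minimal sketch of the hairpin probe run by hand against the deployment created earlier:

    kubectl --context auto-712615 exec deployment/netcat -- /bin/sh -c \
      "nc -w 5 -z netcat 8080 && echo hairpin OK"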

                                                
                                    
TestNetworkPlugins/group/calico/Start (367.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (6m7.509617767s)
--- PASS: TestNetworkPlugins/group/calico/Start (367.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (364.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1225 13:50:43.385473 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:50:56.600710 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:51:26.363096 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
E1225 13:52:05.307901 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:52:10.400083 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:52:28.378065 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.383398 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.393713 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.414111 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.454480 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.534941 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:28.695499 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:29.016265 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:29.657324 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:30.937871 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:33.498292 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:38.618912 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:52:48.859098 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:53:09.339608 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:53:12.755945 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:53:40.442895 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
E1225 13:53:50.300341 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:53:56.706944 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
E1225 13:54:07.347915 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/addons-294911/client.crt: no such file or directory
E1225 13:54:21.462131 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:54:37.836679 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:37.842042 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:37.852420 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:37.872803 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:37.913098 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:37.993890 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:38.154371 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:38.475311 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:39.115527 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:40.396737 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:42.957965 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:48.078626 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:54:49.148170 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/no-preload-330063/client.crt: no such file or directory
E1225 13:54:58.319741 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:55:12.221364 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
E1225 13:55:18.800055 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (6m4.778009711s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (364.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vrh9p" [ba5e2438-b57b-4458-a729-8e5edd3232d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005891356s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
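The ControllerPod step only waits for a pod carrying the CNI's controller label to report Running: app=kindnet in kube-system here, and k8s-app=calico-node for the calico run further down. A minimal sketch of the equivalent manual check with the same label selector:

    kubectl --context kindnet-712615 -n kube-system get pods -l app=kindnet -o wide
    # The kindnet-... pod listed is the one the test waited on above.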

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jtwpd" [e9e2452d-42bd-408c-9309-e4d928b82238] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jtwpd" [e9e2452d-42bd-408c-9309-e4d928b82238] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.009008072s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (105.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m45.432323187s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.43s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2gcjd" [644651b5-1e93-4de3-b23c-8b1acaa3fe14] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00732169s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zmssg" [dc7d822b-20a5-4419-ab34-ef156324dfca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zmssg" [dc7d822b-20a5-4419-ab34-ef156324dfca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006464654s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-058636 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-058636 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-058636 -n newest-cni-058636
E1225 13:56:26.363216 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/ingress-addon-legacy-441885/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-058636 -n newest-cni-058636: exit status 2 (299.687961ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-058636 -n newest-cni-058636
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-058636 -n newest-cni-058636: exit status 2 (322.95734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-058636 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-058636 -n newest-cni-058636
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-058636 -n newest-cni-058636
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.16s)
E1225 13:57:21.681952 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/auto-712615/client.crt: no such file or directory
E1225 13:57:28.378663 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
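The Pause step pauses the whole profile, checks that status reports the apiserver as Paused and the kubelet as Stopped (the exit status 2 seen above is expected while paused), then unpauses and checks status again. The same sequence, using the commands recorded in this log, can be replayed by hand:

    out/minikube-linux-amd64 pause -p newest-cni-058636 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-058636 -n newest-cni-058636   # "Paused", exit status 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-058636 -n newest-cni-058636     # "Stopped", exit status 2
    out/minikube-linux-amd64 unpause -p newest-cni-058636 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-058636 -n newest-cni-058636   # apiserver should report running again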

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (90.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m30.702353569s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k9zdn" [b8404795-4ec5-4281-8a5d-a25fea5fa406] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k9zdn" [b8404795-4ec5-4281-8a5d-a25fea5fa406] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.005788492s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)
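The DNS, Localhost and HairPin steps all run from inside the netcat pod deployed earlier: DNS resolves kubernetes.default through the cluster DNS, Localhost connects to port 8080 on the pod's own loopback, and HairPin connects to the name netcat, which appears to resolve back to the pod through its own Service and so only succeeds when the CNI handles hairpin traffic. The three probes, as run against the calico profile here, are:

    kubectl --context calico-712615 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context calico-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"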

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (116.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1225 13:56:59.760797 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-712615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m56.933912996s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vqdmk" [52b5e496-7e63-4051-98a5-06422d72cf19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vqdmk" [52b5e496-7e63-4051-98a5-06422d72cf19] Running
E1225 13:57:56.061820 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/default-k8s-diff-port-344803/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006977952s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wfnh4" [0699c647-baac-4be1-8017-c9dabcd744bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0061036s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
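For plugins that ship their own agent (calico and flannel in this run), the ControllerPod step waits for the plugin's DaemonSet pod to become healthy before any connectivity checks run. A rough standalone equivalent for the flannel case, assuming the app=flannel label and kube-flannel namespace shown above:

    kubectl --context flannel-712615 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m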

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q2j8f" [2139b434-2850-4e41-82b9-228179eccd81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1225 13:58:12.755594 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/old-k8s-version-198979/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-q2j8f" [2139b434-2850-4e41-82b9-228179eccd81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004871798s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-712615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-712615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s9vqz" [ece512e1-f028-43eb-aa4b-b33f9296c9bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s9vqz" [ece512e1-f028-43eb-aa4b-b33f9296c9bc] Running
E1225 13:58:56.706604 1449797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/functional-467117/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004302888s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-712615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-712615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/308)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
247 TestStartStop/group/disable-driver-mounts 0.17
253 TestNetworkPlugins/group/kubenet 4.37
261 TestNetworkPlugins/group/cilium 4.29
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-246503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-246503
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-712615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-712615

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-712615"

                                                
                                                
----------------------- debugLogs end: kubenet-712615 [took: 4.190304116s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-712615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-712615
--- SKIP: TestNetworkPlugins/group/kubenet (4.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-712615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-712615" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17847-1442600/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Dec 2023 13:12:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.129:8443
  name: NoKubernetes-935850
contexts:
- context:
    cluster: NoKubernetes-935850
    extensions:
    - extension:
        last-update: Mon, 25 Dec 2023 13:12:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-935850
  name: NoKubernetes-935850
current-context: NoKubernetes-935850
kind: Config
preferences: {}
users:
- name: NoKubernetes-935850
  user:
    client-certificate: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/NoKubernetes-935850/client.crt
    client-key: /home/jenkins/minikube-integration/17847-1442600/.minikube/profiles/NoKubernetes-935850/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-712615

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-712615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-712615"

                                                
                                                
----------------------- debugLogs end: cilium-712615 [took: 4.129813824s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-712615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-712615
--- SKIP: TestNetworkPlugins/group/cilium (4.29s)

                                                
                                    